Reinforcement learning training for large language models is hindered by inefficient rollout phases, which can consume up to 70% of total training time when generating lengthy trajectories of 16,000 tokens or more. To address this bottleneck, researchers have introduced SortedRL, an online length-aware scheduling method that accelerates RL training by optimizing the rollout process. By streamlining this phase, SortedRL enables more efficient training of LLMs, particularly for tasks that require extended chain-of-thought generation. Faster RL training has significant implications for the advancement of LLM capabilities, which in turn can reshape both the capability and risk surfaces of these models. As LLMs continue to evolve, practitioners should track developments like SortedRL alongside their potential security consequences.
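The intuition behind length-aware rollout scheduling can be illustrated with a toy cost model. The sketch below is not SortedRL's actual algorithm (which the paragraph does not detail); it is a minimal, hypothetical demonstration of why sorting rollouts by length before batching helps: when decoding is synchronous, a batch runs until its longest member finishes, so mixing short and long generations in one batch wastes compute on stragglers, while sorted batches finish together.

```python
# Hypothetical illustration of length-aware scheduling, not SortedRL itself.
# Cost model assumption: a batch decodes until its longest rollout completes,
# so its cost is (max length in batch) * (batch width).

def make_batches(items, batch_size):
    """Split a list of rollout lengths into consecutive fixed-size batches."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def batch_cost(batch):
    """Synchronous decoding: every slot waits for the longest rollout."""
    return max(batch) * len(batch)

def total_cost(lengths, batch_size, sort_first):
    """Total decoding cost with or without length-sorted scheduling."""
    order = sorted(lengths) if sort_first else list(lengths)
    return sum(batch_cost(b) for b in make_batches(order, batch_size))

# Hypothetical rollout lengths mixing short answers and 14k-16k-token
# chain-of-thought trajectories.
lengths = [16000, 200, 15000, 300, 400, 14000, 250, 500]

unsorted_cost = total_cost(lengths, batch_size=4, sort_first=False)
sorted_cost = total_cost(lengths, batch_size=4, sort_first=True)
print(unsorted_cost, sorted_cost)  # sorted scheduling wastes far less compute
```

Under this toy model, sorting groups the short rollouts into cheap batches and confines the long trajectories to one expensive batch, cutting total cost roughly in half for the example lengths above.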