Reinforcement learning (RL) can enhance the reasoning capabilities of large language models (LLMs), particularly on long-horizon tasks. To study how this works at scale, researchers have introduced ScaleLogic, a synthetic framework that allows controlled manipulation of task difficulty along two primary axes: proof planning depth and logical complexity. Because both axes can be varied independently, the framework enables a systematic examination of how RL training scales with increasing task difficulty, addressing a significant gap in current research.

This advancement also raises important security considerations: as RL improves LLMs' performance on complex, multi-step reasoning tasks, their increased capabilities can introduce new risks. The integration of RL and LLMs thus reshapes both the capability and the risk surfaces of these models. Understanding the impact of RL on LLMs' reasoning abilities is therefore crucial for practitioners seeking to mitigate potential security threats.
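To make the two difficulty axes concrete, the following is a minimal sketch (not the actual ScaleLogic implementation; all function names and the task encoding are illustrative assumptions) of a synthetic task generator where `depth` controls proof planning depth via the length of an implication chain, and `n_distractors` adds irrelevant rules as a stand-in for logical complexity:

```python
import random

def make_task(depth, n_distractors, seed=0):
    """Hypothetical generator: build a deduction task whose solution requires
    a chain of `depth` implications (proof planning depth), padded with
    unrelated distractor rules (a proxy for logical complexity).
    Returns (initial facts, rules, goal)."""
    rng = random.Random(seed)
    chain = [f"p{i}" for i in range(depth + 1)]
    rules = [(chain[i], chain[i + 1]) for i in range(depth)]
    # Distractor rules use symbols disjoint from the chain, so they can
    # never fire; they only inflate the search space.
    for j in range(n_distractors):
        rules.append((f"q{j}", f"q{j}_next"))
    rng.shuffle(rules)
    return {chain[0]}, rules, chain[-1]

def forward_chain(facts, rules, goal, max_steps=1000):
    """Naive forward chaining over implication rules.
    Returns the number of derivation steps needed to reach `goal`,
    or None if the goal is unreachable within `max_steps`."""
    facts = set(facts)
    for step in range(1, max_steps + 1):
        derived = {b for (a, b) in rules if a in facts and b not in facts}
        if not derived:
            return None
        facts |= derived
        if goal in facts:
            return step
    return None
```

Under this encoding, the minimum number of derivation steps equals `depth`, so difficulty can be dialed up smoothly when constructing an RL curriculum, while distractors raise the cost of finding the right rules without changing the proof length.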