Robotic manipulation tasks are hindered by the limited ability of current machine learning models to supervise processes effectively. Models trained with supervised fine-tuning act as passive observers: they recognize events but do not evaluate the current state against the final task goal. Researchers have introduced a new approach, PRIMO R1, which uses reinforcement learning to enable models to reason about processes and provide active criticism. This approach has the potential to significantly improve the accuracy of process supervision in long-horizon robotic manipulation tasks. Reinforcement learning in this context lets models learn from trial and error rather than simply recognizing patterns. The development has significant implications for robotics, as it enables more effective and efficient robotic manipulation. For practitioners, the takeaway is that this advance can reshape the capability and risk surfaces of robotic systems, making them more secure and reliable.
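The distinction between a passive, SFT-style event recognizer and an RL-trained critic that evaluates progress toward the goal can be illustrated with a toy sketch. This is a generic tabular TD-learning example on a hypothetical 1-D task, not the PRIMO R1 implementation; all names and parameters are illustrative assumptions:

```python
import random

GOAL = 10  # hypothetical 1-D task: reach position 10 from position 0

def passive_recognizer(state):
    # SFT-style observer: labels the current event, ignorant of the goal
    return "moved" if state > 0 else "idle"

# Active critic: per-state progress scores refined by trial and error
values = [0.0] * (GOAL + 1)

def rollout(noise=0.3, alpha=0.1, gamma=0.9):
    state = 0
    while state < GOAL:
        step = 1 if random.random() > noise else -1  # noisy policy
        nxt = max(0, min(GOAL, state + step))
        reward = 1.0 if nxt == GOAL else 0.0
        # TD(0) update: criticize the state relative to task progress
        values[state] += alpha * (reward + gamma * values[nxt] - values[state])
        state = nxt

random.seed(0)
for _ in range(500):
    rollout()

# After training, states nearer the goal earn higher criticism scores,
# while the passive recognizer assigns them all the same event label.
assert values[9] > values[1]
```

The passive recognizer's output never changes with task context, whereas the critic's scores encode distance to completion, which is the kind of goal-relative signal that active process supervision requires.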