Robotic agents can now adapt in real time to unforeseen changes during operation, thanks to a new framework that combines online continual reinforcement learning with world model feedback. Inspired by biological systems, the approach enables automated adaptation during deployment, overcoming a central limitation of traditional offline-trained controllers: their parameters are fixed once training ends. Built on the DreamerV3 model-based reinforcement learning algorithm, the framework produces self-adapting agents that respond to new situations without manual updates.

The implications are significant: the development shifts the threat model associated with reinforcement learning from a criminal context to a geopolitical one[1], and the security playbook must be revised to account for autonomous systems that adapt and learn in real time. What matters for practitioners is that they should reassess their security strategies to address the emerging risks posed by self-adapting robotic agents.
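To make the adaptation mechanism concrete, here is a minimal Python sketch of a deployment-time loop in the spirit of DreamerV3-style agents: the robot acts in the real environment, continually refits a learned world model on recent experience, and improves its policy on rollouts imagined inside that model. Every name here (WorldModel, ActorCritic, ReplayBuffer, StubEnv, online_adaptation_loop) is a hypothetical stand-in for illustration, not the framework's actual API.

```python
# Illustrative sketch only: stub classes stand in for real learned components.
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity store of recent real-world transitions."""

    def __init__(self, capacity=10_000):
        self._data = deque(maxlen=capacity)

    def __len__(self):
        return len(self._data)

    def add(self, transition):
        self._data.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self._data), min(batch_size, len(self._data)))


class WorldModel:
    """Stub for a learned dynamics model (latent encoder + predictor)."""

    def encode(self, obs):
        return obs  # a real model would map observations to a latent state

    def train_step(self, batch):
        pass  # a real model would fit dynamics/reward heads on this batch

    def imagine(self, policy, start_obs, horizon=15):
        # Roll the policy forward inside the model; here just a dummy trace.
        return [(start_obs, policy.act(start_obs), 0.0) for _ in range(horizon)]


class ActorCritic:
    """Stub policy/value networks trained on imagined rollouts."""

    def act(self, latent):
        return random.choice([-1.0, 0.0, 1.0])  # placeholder action selection

    def train_step(self, imagined_rollout):
        pass  # a real agent would run policy/value updates on the rollout


class StubEnv:
    """Toy environment stand-in; replace with the real robot interface."""

    def reset(self):
        return 0.0

    def step(self, action):
        obs = random.gauss(0.0, 1.0)
        return obs, -abs(obs), False  # observation, reward, done


def online_adaptation_loop(env, world_model, actor_critic,
                           steps=1_000, update_every=16, batch_size=64):
    """Interleave real-world acting with world-model and policy updates."""
    buffer = ReplayBuffer()
    obs = env.reset()
    for step in range(steps):
        action = actor_critic.act(world_model.encode(obs))
        obs_next, reward, done = env.step(action)
        buffer.add((obs, action, reward, obs_next, done))

        # Continual updates during deployment: refit the world model on
        # recent real experience, then improve the policy in imagination.
        if step % update_every == 0 and len(buffer) >= batch_size:
            batch = buffer.sample(batch_size)
            world_model.train_step(batch)
            rollout = world_model.imagine(actor_critic, obs_next)
            actor_critic.train_step(rollout)

        obs = env.reset() if done else obs_next


if __name__ == "__main__":
    online_adaptation_loop(StubEnv(), WorldModel(), ActorCritic())
```

The point the sketch highlights is the interleaving: because updates happen during operation rather than in an offline training phase, an environmental change shows up in the replay buffer, propagates into the world model, and reshapes the policy without any manual retraining. That same property is what changes the security picture, since the deployed system's behavior is no longer fixed at ship time.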