Autonomous AI agents can pose significant risks even when fully authorized, because their behavior can drift and become unsafe without any code changes. To address this, researchers have proposed the Informational Viability Principle: estimate a bound on the risk that has not yet been observed, and permit an action only when the agent's capacity exceeds that bound [1]. The bound is estimated from signals gleaned from the agent's recent behavior, including its utility, its surprise (how far its actions deviate from what was expected), and the risk already observed in its actions. Gating actions on this bound gives developers a governance mechanism that tightens automatically as an agent drifts, rather than relying on static authorization. A minimal sketch of this gating logic appears below. The implications extend beyond any single deployment: because state-aligned threat activity against or through such agents can carry geopolitical consequences, practitioners have a strong incentive to prioritize secure and governable autonomous AI agents.
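
The source does not give the exact formula for the bound, so the sketch below is illustrative only: the names `BehaviorSummary`, `estimate_risk_bound`, `action_allowed`, the weighting inside the estimator, and the `safety_margin` parameter are all assumptions, not the method from [1]. It shows the shape of the idea: combine observed behavioral signals into a conservative bound on unobserved risk, then allow an action only while the agent's capacity exceeds that bound.

```python
from dataclasses import dataclass


@dataclass
class BehaviorSummary:
    utility: float        # recent utility the agent has delivered
    surprise: float       # deviation of behavior from the expected policy
    observed_risk: float  # risk already seen in logged actions


def estimate_risk_bound(b: BehaviorSummary, safety_margin: float = 1.5) -> float:
    """Hypothetical estimator: combine observed signals into a conservative
    upper bound on unobserved risk. The weighting is illustrative, not the
    formula from [1]."""
    return safety_margin * (b.observed_risk + b.surprise) - min(b.utility, 0.0)


def action_allowed(agent_capacity: float, b: BehaviorSummary) -> bool:
    """Gate an action: permit it only while the agent's capacity exceeds the
    estimated bound on unobserved risk."""
    return agent_capacity > estimate_risk_bound(b)


# Example: a drifting agent (rising surprise) eventually fails the gate,
# even though no code has changed.
summary = BehaviorSummary(utility=0.8, surprise=0.6, observed_risk=0.2)
print(action_allowed(agent_capacity=2.0, b=summary))  # True  (bound ~= 1.2)
summary.surprise = 1.4  # behavior drifts further from expectations
print(action_allowed(agent_capacity=2.0, b=summary))  # False (bound ~= 2.4)
```

The key design choice is that the gate is evaluated continuously from behavioral evidence rather than configured once at deployment, so authorization narrows as drift accumulates.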