Cybersecurity agencies from the US and its Five Eyes allies have issued joint guidance on securely deploying autonomous artificial intelligence systems, citing insufficient safeguards in critical infrastructure and defense sectors. The guidance specifically targets agentic AI, which uses large language models to make decisions and take actions independently and requires connections to external tools to function. This autonomy, combined with the potential for unchecked interactions with external systems, makes agentic AI particularly vulnerable to exploitation. The agencies warn that the rapid deployment of agentic AI in sensitive sectors poses significant cybersecurity risks and urge organizations to prioritize secure deployment practices. For practitioners, this means treating the security of autonomous AI systems as a core design concern rather than an afterthought.