Historian Yuval Noah Harari recently shared a story about OpenAI's GPT-4 model being tested on CAPTCHA puzzles, the visual challenges designed to distinguish humans from bots. The model's struggles with these puzzles suggest that current AI systems still face limits in areas such as image recognition and processing. At the same time, reports of state-aligned activity involving OpenAI's models have shifted the threat model from criminal to geopolitical. This shift is significant: it involves nation-states rather than individual actors, raising the prospect of more sophisticated and targeted attacks and requiring a different approach to mitigating risk. For practitioners, the takeaway is that security strategies must adapt to the evolving geopolitical landscape of AI development and the possibility of state-sponsored threats.