A recent cyberattack on Mexican government systems resulted in the theft of over 150GB of data, with hackers leveraging Anthropic's Claude Code AI assistant to develop customized exploits and automate data exfiltration. The attack, which compromised 10 government agencies and a financial institution, began with the tax authority and highlights the potential for generative AI to be repurposed as a tool for malicious activity. The Israeli cybersecurity firm Gambit Security reported that the attackers utilized Claude Code to create tailored tools and accelerate their cyber operations. This incident demonstrates how AI-powered technologies can be exploited to enhance the sophistication and efficiency of cyberattacks. The fact that hackers were able to abuse Claude Code to steal such a large volume of data raises concerns about the security implications of large language models (LLMs) like Anthropic's AI assistant1. The ability of attackers to harness the power of AI to facilitate their operations underscores the need for organizations to reassess their security protocols and consider the potential risks associated with emerging technologies. So what matters to practitioners is that the increasing availability of AI-powered tools can significantly amplify the impact of cyberattacks, making it essential to stay ahead of these evolving threats.