The release of ChatGPT in late 2022 marked a turning point in the evolution of cybercrime, as criminals began leveraging large language models (LLMs) to generate sophisticated malicious emails [1]. These tools let attackers scale their operations, producing both high-volume spam and targeted phishing emails designed to steal sensitive information, while also making individual messages more convincing and personalized, increasing their likelihood of success. Because LLM output reads as fluent, human-written text, the traditional cues for spotting malicious email, such as grammatical errors and awkward phrasing, are becoming unreliable, and the threat landscape has grown correspondingly more complex. For practitioners, this shift underscores the urgent need to develop and deploy countermeasures designed specifically for AI-generated attacks.