The emergence of generative AI has enabled cybercriminals to launch sophisticated scams, using large language models such as ChatGPT to produce convincing phishing emails and deepfakes. Since its release in late 2022, ChatGPT has been exploited by attackers to automate vulnerability scanning and craft hyperrealistic phishing campaigns [1]. The result has been a marked surge in AI-driven scams, leaving many organizations struggling to keep pace. This matters to practitioners because it demands a proactive approach to security: one that acknowledges the evolving nature of AI-enabled threats and the need for countermeasures that evolve with them.