The release of ChatGPT in late 2022 marked a turning point in the evolution of cybercrime, as criminals began leveraging large language models to generate sophisticated malicious emails¹. These tools have let attackers supercharge their operations, producing both high-volume spam and targeted phishing messages designed to steal sensitive information, while also crafting more convincing, personalized lures that raise the odds of success. As a result, the threat landscape has grown markedly more complex, and AI-driven attacks have become a major concern for security professionals. Because AI models produce human-seeming text, distinguishing legitimate emails from malicious ones is harder than ever. For practitioners, this underscores the urgent need to develop and deploy effective countermeasures against AI-powered cyberattacks.
Supercharged scams
⚡ High Priority
Why This Matters
When ChatGPT was released to the public in late 2022, it opened people’s eyes to how easily generative AI could churn out vast amounts of human-seeming text from simple prompts.
References
- MIT Tech Review AI. (2026, April 21). Supercharged scams. *MIT Technology Review*. https://www.technologyreview.com/2026/04/21/1135647/supercharged-scams-ai-artificial-intelligence/