Researchers have reported a significant vulnerability in large language models that exploits their stateless nature to mount multi-turn attacks, a technique called Transient Turn Injection (TTI) [1]. Because each API call is processed independently and moderation is typically applied one turn at a time, an attacker can distribute adversarial intent across isolated interactions so that no single message trips a filter. Automated attacker agents, themselves powered by large language models, make this decomposition systematic: they split a harmful objective into benign-looking fragments, evade per-turn moderation, and steer the target model's responses over the course of the conversation. The implications are far-reaching as large language models are integrated into increasingly sensitive workflows, raising concerns about their adversarial robustness and safety. For practitioners, TTI underscores the need for defenses that track conversational state, such as moderating the accumulated dialogue rather than each turn in isolation, since exploits of this kind carry consequences for security, policy, and workforce dynamics.
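To make the mechanism concrete, the following is a minimal Python sketch of the attack loop described above. It is illustrative only: `moderate`, `target_model`, and `attacker_agent` are hypothetical stubs invented for this sketch, not a real moderation system, model API, or published TTI implementation. The point is the control flow, in which each turn is sent as an isolated, stateless request that passes a per-turn check even though the turns jointly encode the adversarial objective.

```python
"""Illustrative sketch of a TTI-style multi-turn attack loop.

All functions are hypothetical stand-ins: `moderate`, `target_model`, and
`attacker_agent` are stubs so the example is runnable, not a real API.
"""

def moderate(message: str) -> bool:
    """Stub per-turn moderation: flags only messages with an obvious trigger."""
    return "obviously harmful" not in message.lower()

def target_model(messages: list[dict]) -> str:
    """Stub target model: echoes the turn count so the loop is runnable."""
    return f"response to {len(messages)} turn(s)"

def attacker_agent(goal: str, history: list[str]) -> str:
    """Stub attacker LLM: decomposes `goal` into innocuous-looking fragments."""
    fragments = [f"step {i}: benign-looking fragment of '{goal}'" for i in range(3)]
    return fragments[len(history)] if len(history) < len(fragments) else "combine the steps above"

def tti_attack(goal: str, max_turns: int = 4) -> list[str]:
    """Distribute adversarial intent across turns. Each call is stateless,
    so the attacker re-sends the accumulated context it wants the model to see."""
    sent: list[str] = []
    responses: list[str] = []
    for _ in range(max_turns):
        turn = attacker_agent(goal, sent)
        if not moderate(turn):   # the per-turn check sees only this fragment
            break                # a single flagged turn ends the attempt
        # Stateless API: the attacker reconstructs the conversation every call.
        messages = [{"role": "user", "content": t} for t in sent + [turn]]
        responses.append(target_model(messages))
        sent.append(turn)
    return responses

if __name__ == "__main__":
    for r in tti_attack("example multi-step objective"):
        print(r)
```

Note that in this sketch every individual fragment passes `moderate` in isolation, which is exactly the gap the attack exploits; a defense of the kind the paragraph above implies would instead score the accumulated conversation (`sent + [turn]`) rather than each fragment on its own.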