Researchers have identified a novel method to detect multi-turn attacks on large language models (LLMs) by analyzing the activation patterns in the model's residual stream. This approach, termed latent adversarial detection, leverages the fact that each phase of a multi-turn attack - including trust-building, pivoting, and escalation - leaves a distinct signature in the activation levels. By measuring the total path length of these activations, it is possible to distinguish between benign and malicious conversations, even when individual turns appear harmless1. This is particularly significant in the context of state-aligned threat activity, where the implications of a successful attack extend far beyond the immediate target. The ability to detect such attacks has significant implications for cybersecurity, as it raises the stakes from criminal activity to geopolitical maneuvering, so what matters most to practitioners is the potential to bolster defenses against these sophisticated threats.