Researchers have used an autoresearch pipeline powered by Claude Code, an LLM agent capable of writing code and conducting research autonomously, to discover novel white-box adversarial attack algorithms for Large Language Models (LLMs). The discovered algorithms outperform more than 30 existing methods, and the pipeline has surfaced previously unknown vulnerabilities in LLMs. Beyond the attacks themselves, the result highlights the potential of autoresearch to advance AI research and engineering. For practitioners, it underscores the need to continually assess and improve the robustness of LLMs against adversarial attacks, and to keep investing in LLM security research to prevent potential exploits.
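White-box attacks of the kind discussed here typically exploit gradient access: the attacker differentiates an objective through the model's one-hot token inputs and greedily swaps tokens whose first-order estimate most reduces the loss. The sketch below illustrates that greedy coordinate-descent idea on a toy linear scoring model rather than a real LLM; the model, names, and parameters are illustrative assumptions, not the algorithms discovered by the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, SEQ = 50, 8
# Toy "model": the loss is minus the sum of per-position token scores.
# A real attack would backpropagate through an LLM instead.
W = rng.normal(size=(SEQ, VOCAB))

def loss(tokens):
    # Lower loss means the attack is closer to its objective.
    return -sum(W[i, t] for i, t in enumerate(tokens))

def grad_wrt_onehot(tokens):
    # For this linear toy model the gradient of the loss w.r.t. the
    # one-hot inputs is just -W; with an LLM, autograd supplies this.
    return -W

def greedy_attack(tokens, steps=20):
    tokens = list(tokens)
    for _ in range(steps):
        g = grad_wrt_onehot(tokens)
        # First-order estimate of the loss change if position i is
        # swapped to token v: g[i, v] - g[i, tokens[i]].
        cur = np.array([g[i, t] for i, t in enumerate(tokens)])
        delta = g - cur[:, None]
        i, v = np.unravel_index(np.argmin(delta), delta.shape)
        if delta[i, v] >= 0:   # no swap improves the estimate; stop
            break
        tokens[i] = int(v)
    return tokens

start = list(rng.integers(0, VOCAB, size=SEQ))
adv = greedy_attack(start)
assert loss(adv) <= loss(start)
```

Real white-box attack algorithms refine this loop in many ways (batched candidate evaluation, fluency constraints, restarts), but the gradient-guided greedy substitution above is the core mechanism they build on.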