AI systems are being increasingly deployed in critical domains such as healthcare, finance, and defense, despite their vulnerability to adversarial attacks. Traditional AI red teaming approaches require manual workflows, which can take operators weeks to craft, only to potentially need rebuilding if results are inadequate. A new approach to AI red teaming is emerging, one that can reduce the time required from weeks to hours1. This shift is crucial as AI becomes more pervasive in high-stakes industries. The old method of manually assembling attacks, transforms, and scorers is no longer sufficient, given the rapid evolution of AI threats. By streamlining the red teaming process, operators can more effectively test and validate the security of AI systems, which is essential for preventing potentially disastrous consequences. This development matters to security practitioners because it enables them to respond more quickly and effectively to emerging threats.