The deployment of artificial intelligence systems across critical domains has introduced significant security risks: vulnerabilities in these systems can be exploited to cause high-profile failures. To address this, researchers have developed AVISE, a modular open-source framework for evaluating the security of AI systems. The framework provides a systematic approach to identifying and assessing vulnerabilities before they can be exploited. Because AVISE is modular, it can be adapted to a variety of AI systems, making it useful to both developers and security professionals, and its open-source nature invites community involvement and continuous improvement. For practitioners, adopting AVISE can help ensure the secure deployment of AI systems in critical domains and reduce the likelihood of costly, damaging security breaches.
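The source does not describe AVISE's actual interface, so the following is only an illustrative sketch of what a modular security-evaluation harness of this kind might look like. All class and function names (`SecurityCheck`, `InputBoundsCheck`, `evaluate`, etc.) are hypothetical and are not AVISE's API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Finding:
    """Result of one security check against a model."""
    check: str
    passed: bool
    detail: str


class SecurityCheck(ABC):
    """One pluggable vulnerability check (hypothetical interface)."""
    name: str

    @abstractmethod
    def run(self, model) -> Finding:
        ...


class InputBoundsCheck(SecurityCheck):
    """Flags models that crash on extreme, out-of-range inputs."""
    name = "input-bounds"

    def run(self, model) -> Finding:
        try:
            model(float("inf"))
            model(-1e300)
        except Exception as exc:
            return Finding(self.name, False, f"model raised {exc!r}")
        return Finding(self.name, True, "handled extreme inputs")


class DeterminismCheck(SecurityCheck):
    """Flags nondeterministic outputs, which complicate auditing."""
    name = "determinism"

    def run(self, model) -> Finding:
        a, b = model(0.5), model(0.5)
        ok = a == b
        return Finding(self.name, ok, "outputs match" if ok else "outputs differ")


def evaluate(model, checks):
    """Run every registered check against the model and collect findings."""
    return [check.run(model) for check in checks]


if __name__ == "__main__":
    clamp = lambda x: max(0.0, min(1.0, x))  # toy "model" under test
    for f in evaluate(clamp, [InputBoundsCheck(), DeterminismCheck()]):
        print(f"{f.check}: {'PASS' if f.passed else 'FAIL'} ({f.detail})")
```

The key design idea this sketch illustrates is that each vulnerability check is an independent module behind a common interface, so new checks can be registered without modifying the evaluation loop.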