A critical vulnerability was discovered in OpenAI Codex, an AI code-generation tool, that could have been exploited to compromise GitHub tokens, granting attackers unauthorized access to sensitive repositories and the ability to manipulate the code they contain. The flaw underscores the risks of integrating AI-powered coding tools with popular development platforms like GitHub: a compromised token can lead to significant security breaches, so the credentials such tools hold must be treated as high-value secrets. That a vulnerability of this severity existed in a widely used tool also raises broader concerns about the security of AI-generated code, and it emphasizes the need for rigorous testing and validation of AI coding assistants before they are trusted with repository access.
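One practical mitigation for developers is to audit the scopes granted to any GitHub token an AI tool holds, so a leaked token exposes as little as possible. For classic personal access tokens, GitHub reports the granted scopes in the `X-OAuth-Scopes` response header of authenticated API calls. The helper below is a minimal sketch that parses such a header value and flags scopes outside an allow-list; the allow-list itself is an illustrative assumption, not an official recommendation.

```python
def over_privileged_scopes(oauth_scopes_header: str, allowed: set[str]) -> list[str]:
    """Return the granted scopes that fall outside the allow-list.

    `oauth_scopes_header` is the comma-separated value GitHub places in the
    X-OAuth-Scopes response header for classic personal access tokens,
    e.g. "repo, read:user".
    """
    granted = {s.strip() for s in oauth_scopes_header.split(",") if s.strip()}
    return sorted(granted - allowed)


if __name__ == "__main__":
    # A token carrying "repo" or "admin:org" can read and modify private
    # repositories; flag anything beyond the minimal read-only scope.
    header = "repo, admin:org, read:user"
    print(over_privileged_scopes(header, allowed={"read:user"}))
```

Running the audit above against a token that only needs `read:user` would surface `repo` and `admin:org` as excess privileges worth revoking.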