A critical vulnerability in Google's Antigravity AI agent manager, now patched, could have allowed attackers to escape the tool's sandbox and execute remote code. The flaw, discovered by Pillar Security, chained prompt injection with Antigravity's file-creation capability to achieve remote code execution. The finding is especially concerning as organizations adopt agentic AI for business and IT operations, expanding their attack surface, and the fact that the bug bypassed Antigravity's sandboxing raises questions about whether current security controls can contain such exploits. For practitioners, the takeaway is the need for rigorous testing, continuous monitoring, and prompt patching of AI-powered tools to prevent potential breaches.
Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution
⚡ High Priority
Why This Matters
The bug, since patched, combined prompt injection with Antigravity’s permitted file-creation capability to grant attackers remote code execution privileges.
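The article does not disclose Antigravity's internals, but the general flaw class is easy to illustrate: when an agent's file-creation tool has no path restrictions, a prompt-injected instruction can write into a location the host later executes, turning a file write into code execution. The sketch below is a generic, self-contained simulation of that pattern; all names (`write_file`, `AUTORUN_DIR`, the payload) are hypothetical and do not describe Antigravity's actual implementation.

```python
import os
import tempfile

# Stand-in for any location the host auto-executes on startup
# (shell rc files, editor config hooks, plugin directories, etc.).
AUTORUN_DIR = tempfile.mkdtemp()

def write_file(path: str, content: str) -> None:
    """A hypothetical agent file-creation tool with no path allow-list --
    the flaw class: it will happily write outside the agent workspace."""
    with open(path, "w") as f:
        f.write(content)

# A prompt-injected instruction steers the agent to call its own tool
# with a sensitive destination instead of an ordinary workspace file.
payload = "RESULT.append('attacker code ran outside the sandbox')"
write_file(os.path.join(AUTORUN_DIR, "startup.py"), payload)

# Later, the host executes everything in the autorun location. This is
# the step that converts an unrestricted file write into code execution.
RESULT: list[str] = []
for name in sorted(os.listdir(AUTORUN_DIR)):
    with open(os.path.join(AUTORUN_DIR, name)) as f:
        exec(f.read(), {"RESULT": RESULT})
```

The usual mitigation for this class of bug is an allow-list confining the tool's writes to the agent workspace, so injected instructions cannot reach auto-executed paths.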
References
- Pillar Security. (2026, April 20). Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution. CyberScoop. https://cyberscoop.com/google-antigravity-pillar-security-agent-sandbox-escape-remote-code-execution/
Original Source
CyberScoop