Large language models powering AI agents are susceptible to indirect prompt injection attacks, in which malicious instructions embedded in untrusted data can trigger hazardous actions. To counter this, researchers propose system-level defenses built around dynamic replanning and security policy updates [1]. Rather than trusting a plan once it is formed, the agent continually reassesses both its plan and its security policy as new data enters its context, so that injected instructions encountered mid-task can be detected and contained before they translate into harmful tool calls. Crucially, the defense is enforced by the surrounding runtime at the system level, rather than relying solely on hardening the model itself. This holistic approach helps preserve the reliability and trustworthiness of AI agents across applications, and as attacks grow more sophisticated, such system-level defenses become an essential priority for practitioners deploying agents in production.
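As a rough illustration of this pattern, the Python sketch below interposes a security-policy gate between an LLM's proposed tool calls and their execution: the policy is tightened dynamically once untrusted data enters the context, and a blocked action triggers replanning instead of execution. The `SecurityPolicy` class, the tool names, the specific restriction rule, and the `propose_plan`/`execute` callables are all hypothetical stand-ins for this sketch, not the design of any specific defense from the cited work.

```python
from dataclasses import dataclass, field


@dataclass
class SecurityPolicy:
    """Allow-list of tool calls; tightened once untrusted data is in context."""
    allowed_tools: set = field(
        default_factory=lambda: {"search", "read_file", "send_email"}
    )

    def on_untrusted_input(self) -> None:
        # Illustrative rule (an assumption of this sketch): after
        # attacker-controllable content enters the context, revoke
        # side-effecting tools until the plan is re-approved.
        self.allowed_tools.discard("send_email")

    def permits(self, tool: str) -> bool:
        return tool in self.allowed_tools


# Hypothetical set of tools that ingest data from outside the trust boundary.
UNTRUSTED_TOOLS = {"search", "read_file"}


def run_agent(task, propose_plan, execute, max_replans=3):
    """Agent loop with a system-level policy gate and dynamic replanning.

    propose_plan(task, feedback) -> list of (tool, args); stands in for the LLM.
    execute(tool, args) -> result; stands in for the tool runtime.
    """
    policy = SecurityPolicy()
    feedback = None
    for _ in range(max_replans):
        for tool, args in propose_plan(task, feedback):
            if not policy.permits(tool):
                # Blocked action: it is never executed. The agent replans
                # with an explanation, so injected instructions cannot
                # escalate into hazardous tool calls.
                feedback = f"'{tool}' is blocked by the current security policy"
                break
            execute(tool, args)
            if tool in UNTRUSTED_TOOLS:
                policy.on_untrusted_input()  # dynamic policy update
        else:
            return "task complete"
    return "halted: no policy-compliant plan found"
```

The key design choice the sketch tries to capture is that enforcement lives outside the model: even if injected text fully hijacks `propose_plan`, the runtime-level policy check still decides which actions actually run.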