Adversaries have been observed exploiting large language models (LLMs) using a novel technique known as web-based indirect prompt injection, where attackers embed malicious prompts within hidden web content to deceive AI agents. This method allows attackers to manipulate LLMs into generating fraudulent responses, which can be used for high-impact fraud. The attackers leverage the fact that LLMs can be tricked into processing hidden content, such as HTML comments or invisible text, to inject malicious prompts. This technique has been observed in the wild, with real-world attacks demonstrating its potential for exploitation. The vulnerability of LLMs to indirect prompt injection attacks poses a significant risk, as it can be used to generate convincing phishing emails, fake news articles, or other types of malicious content. Researchers at Palo Alto Unit42 have identified this emerging threat and highlighted the need for developers to implement robust security measures to prevent such attacks1. The implications of this vulnerability are far-reaching, and its exploitation can have severe consequences for individuals and organizations that rely on LLMs. So what matters most to practitioners is that they must now consider the potential for indirect prompt injection attacks when designing and deploying LLM-based systems.