A critical flaw has been discovered in the Claude Google Chrome extension, developed by Anthropic, that allows any website to inject malicious prompts with no user interaction. The vulnerability enables zero-click, XSS-style prompt injection: because the extension accepts and processes prompts from any website, an attacker-controlled page can trigger unauthorized actions while bypassing user consent entirely. According to Koi Security researcher Oren Yomtov, the flaw permits silent injection of prompts, making them appear as though the user initiated them.

The vulnerability underscores the security risks of large language model (LLM) tooling, particularly tools integrated into widely used browsers like Google Chrome. As LLMs continue to advance, their security implications will likely become more pronounced, posing significant challenges for developers and users alike. The discovery of this flaw highlights the need for rigorous testing and security audits to mitigate such risks, and makes it essential for practitioners to prioritize secure development practices.
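The root cause described above is that the extension treats any page as a legitimate source of prompts. A standard defense for this class of bug is to gate incoming messages on a strict origin allowlist. The sketch below is a minimal, hypothetical illustration of that pattern, not Anthropic's actual implementation; the origin list and function names are assumptions for the example.

```javascript
// Hypothetical mitigation sketch: validate a message sender's origin
// against an explicit allowlist before treating its payload as a prompt.
// The allowlisted origin below is illustrative only.
const TRUSTED_ORIGINS = new Set([
  "https://claude.ai",
]);

function isTrustedSender(origin) {
  // Parse the origin and require an exact match against the allowlist.
  try {
    const url = new URL(origin);
    return TRUSTED_ORIGINS.has(url.origin);
  } catch {
    return false; // malformed origin strings are rejected outright
  }
}

// In an extension's background script, this check would gate message
// handling before any prompt is forwarded to the model, e.g.:
// chrome.runtime.onMessageExternal.addListener((msg, sender) => {
//   if (!isTrustedSender(sender.origin)) return; // drop silently
//   handlePrompt(msg); // hypothetical handler
// });
```

The key property is fail-closed behavior: unknown, mismatched, or unparseable origins are all rejected, so a malicious page cannot inject a prompt merely by sending a message.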