Multiple security vulnerabilities have been discovered in LangChain and LangGraph, two widely used open-source frameworks for building Large Language Model (LLM) applications. If exploited, the flaws could expose sensitive data, including filesystem contents, environment secrets, and conversation history, allowing unauthorized access that compromises both the confidentiality and integrity of that data. Because both frameworks are popular among developers, the potential impact is broad, and the disclosure underscores the importance of thorough security testing and validation in AI-powered applications.

For practitioners, the takeaway is straightforward: patch affected installations promptly and harden deployments so that unauthorized access to sensitive files and secrets is blocked before it can lead to a breach.
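To make the hardening advice concrete, here is a minimal sketch, not drawn from the disclosed vulnerabilities themselves, of one common defensive pattern for LLM tool code: validating any user- or model-supplied path against an allowlisted root before reading a file, so traversal inputs like `../../.env` cannot leak filesystem contents or environment secrets to the model. The names `ALLOWED_ROOT` and `safe_read_file` are hypothetical and used only for illustration.

```python
from pathlib import Path

# Hypothetical allowlist: only files under this directory may be exposed
# to the model. Adjust to your deployment's actual public data directory.
ALLOWED_ROOT = Path("/srv/app/public").resolve()


def safe_read_file(user_supplied_path: str) -> str:
    """Read a file only if it resolves inside ALLOWED_ROOT.

    Rejects path-traversal inputs (e.g. "../../.env") that could
    otherwise leak filesystem contents or environment secrets.
    """
    # Joining then resolving normalizes "..", symlinks, and absolute paths.
    candidate = (ALLOWED_ROOT / user_supplied_path).resolve()
    # Reject anything that escapes the allowlisted root after resolution.
    if not candidate.is_relative_to(ALLOWED_ROOT):  # Python 3.9+
        raise PermissionError(f"Access outside allowed root: {user_supplied_path}")
    return candidate.read_text()
```

Wrapping file access behind a guard like this, rather than handing an LLM agent an unrestricted file-reading tool, is a defense-in-depth measure that limits the blast radius even if a framework-level flaw is later found.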