Fatal flaw found at the core of AI agent technology: LangChain issues "LangGrinch" alert

TechubNews

A serious security vulnerability has been discovered in "LangChain Core," the library at the heart of AI agent operation. Dubbed "LangGrinch," the flaw allows attackers to steal sensitive information from AI systems. Because it threatens the security foundations of numerous AI applications, it has put the entire industry on alert.

AI security startup Cyata Security disclosed the vulnerability, which has been assigned the identifier CVE-2025-68664 and rated high severity with a CVSS score of 9.3. The core of the problem lies in internal helper functions in LangChain Core that can mistake user input for trusted objects during serialization and deserialization. Using "prompt injection," attackers can manipulate the structured output an agent generates, inserting internal marker keys that are subsequently treated as trusted objects.
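A hedged sketch of the flaw class described above — this is NOT LangChain's actual code, and the marker key, registry, and class names here are invented — shows how a deserializer that trusts any dict carrying an internal marker key can turn attacker-shaped LLM output into a "trusted" internal object:

```python
# Hypothetical illustration of the vulnerability class, not LangChain's code.
MARKER = "__internal_obj__"   # stands in for the library's internal marker key


class Credential:
    """Invented example of a privileged object the deserializer can revive."""

    def __init__(self, value):
        self.value = value


REGISTRY = {"Credential": Credential}


def revive(data):
    """Naively revive any marker-tagged dict into a registered object."""
    if isinstance(data, dict) and data.get(MARKER):
        cls = REGISTRY[data["type"]]
        return cls(**data["kwargs"])   # constructor args are attacker-controlled
    return data


# Structured output an attacker shaped via prompt injection: because it carries
# the marker key, the naive deserializer treats it as a trusted internal object.
llm_output = {MARKER: True, "type": "Credential", "kwargs": {"value": "leak-me"}}
revived = revive(llm_output)
print(type(revived).__name__)  # -> Credential
```

The defect pattern is the trust decision itself: provenance (did this dict come from the library or from model output?) is inferred from a key that untrusted input can simply include.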

LangChain Core sits at the hub of many AI agent frameworks, recording tens of millions of downloads in the past 30 days and more than 847 million in total. Given the applications connected across the LangChain ecosystem, experts expect the vulnerability's impact to be widespread.

Cyata security researcher Yarden Porat explained, "What makes this vulnerability particularly unique is that it is not just a simple deserialization issue but occurs within the serialization pathway itself. The process of storing, streaming, or later restoring structured data generated from AI prompts exposes a new attack surface." Cyata confirmed 12 distinct attack paths from a single prompt, each leading to a different scenario.

A successful attack can trigger remote HTTP requests that exfiltrate environment variables containing high-value secrets such as cloud credentials, database access URLs, vector database credentials, and LLM API keys. Of particular concern, the flaw is structural to LangChain Core itself and requires no third-party tools or external integrations. Cyata warns that it represents a "threat existing within the ecosystem pipeline layer."
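Because the exfiltration target is the process environment, one defense-in-depth measure (independent of patching, and sketched here with invented names) is to launch agent workloads with an allow-listed environment so secrets are simply not present to leak:

```python
import os
import subprocess

# Hypothetical allow-list: only variables the agent process actually needs.
ALLOWED = {"PATH", "HOME", "LANG"}


def minimal_env(allowed=ALLOWED):
    """Return a copy of the current environment stripped to an allow-list."""
    return {k: v for k, v in os.environ.items() if k in allowed}


# Example (commented out): start an agent worker without cloud credentials,
# database URLs, or API keys in its environment. "agent_worker.py" is a
# placeholder name.
# subprocess.run(["python", "agent_worker.py"], env=minimal_env())
```

Secrets the worker genuinely needs can then be injected individually (e.g. via a secrets manager) rather than inherited wholesale from the parent process.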

Security patches for the issue have been released in LangChain Core versions 1.2.5 and 0.3.81. Cyata notified the LangChain team before public disclosure, and the team reportedly responded immediately and took steps to harden long-term security.
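A minimal sketch of a version check against the two patched release lines named above (in practice you would feed it `importlib.metadata.version("langchain-core")`; pre-release suffixes are not handled here):

```python
# Returns True if the given langchain-core version is at or above the patched
# releases cited in the advisory: 0.3.81 for the 0.3 line, 1.2.5 for 1.x.
def is_patched(version: str) -> bool:
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts[:2] == (0, 3):
        return parts >= (0, 3, 81)
    if parts and parts[0] >= 1:
        return parts >= (1, 2, 5)
    return False  # other pre-1.0 lines: treat as unpatched


print(is_patched("0.3.80"))  # -> False
print(is_patched("1.2.5"))   # -> True
```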

Shahar Tal, co-founder and CEO of Cyata, emphasized, "As AI systems are deployed across production environments, the question of what permissions a system can ultimately exercise has become a security issue more critical than code execution itself. In architectures built on agent identity, least privilege and a minimized blast radius must be fundamental design principles."

This incident is expected to prompt the AI industry, whose focus is gradually shifting from manual intervention to agent-based automation, to reflect on fundamental security design principles.
