A critical RCE vulnerability just dropped in LangChain Core (CVE-2025-68664). The blast radius? Over 847 million downloads according to pepy.tech.[1] The attack vector? Unsanitized deserialization that lets attackers inject malicious objects through LLM outputs and prompt injection.
This isn't your typical supply chain vulnerability. This is what happens when 96% of organizations pretend shift-left security works.
The LangGrinch Exploit: Serialization Gone Wrong
CVE-2025-68664 exploits a serialization flaw in langchain-core's dumps() and dumpd() functions. Attackers inject dictionaries containing the reserved 'lc' marker key, which the framework treats as trusted LangChain objects during deserialization.
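To make the pattern concrete, here is a deliberately simplified sketch of the vulnerable idea, not LangChain's actual code: a JSON reviver that treats any dict carrying the reserved "lc" marker as a trusted object-construction spec. The `APPROVED` table and `naive_revive` helper are illustrative stand-ins for the framework's namespace allowlist and deserializer.

```python
import collections
import json

# Toy "approved namespace" table, standing in for the framework's allowlist
# of constructors it will instantiate during deserialization.
APPROVED = {
    ("collections", "Counter"): collections.Counter,
}

def naive_revive(obj):
    """Recursively turn any dict carrying the reserved 'lc' marker into a live object."""
    if isinstance(obj, dict):
        if obj.get("lc") == 1 and obj.get("type") == "constructor":
            ctor = APPROVED.get(tuple(obj.get("id", [])))
            if ctor is not None:
                # Attacker-chosen constructor, attacker-chosen arguments.
                return ctor(*[naive_revive(a) for a in obj.get("args", [])])
        return {k: naive_revive(v) for k, v in obj.items()}
    return obj

# Attacker-controlled LLM output: an ordinary-looking JSON field that smuggles
# the reserved marker. After a round-trip through the serializer, the reviver
# instantiates an object the developer never asked for.
payload = json.loads(
    '{"response_metadata": {"lc": 1, "type": "constructor",'
    ' "id": ["collections", "Counter"], "args": ["aaa"]}}'
)
revived = naive_revive(payload)
print(type(revived["response_metadata"]).__name__)  # Counter
```

The same trick against a real allowlist of LangChain namespaces is what turns a string field into secret exfiltration or SSRF.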
The damage? Arbitrary object instantiation that enables:
Secret exfiltration from environment variables (previously enabled by default via secrets_from_env=True)
SSRF to attacker-controlled endpoints for blind data exfiltration
Object instantiation within pre-approved namespaces (langchain_core, langchain_openai, langchain_aws, langchain_anthropic)
Potential RCE through Jinja2 template rendering in PromptTemplate objects
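Why does template rendering in a deserialized PromptTemplate escalate to RCE? Expression-based template engines like Jinja2 evaluate whatever expression the template body contains. The toy renderer below is a stand-in for that behavior (it is not Jinja2's engine, and the payload here merely computes 7*7, where a real Jinja2 SSTI chain would walk object internals to reach the OS):

```python
import re

def render(template: str, **context) -> str:
    """Toy expression-evaluating renderer: each {{ ... }} is eval()'d."""
    return re.sub(
        r"\{\{(.*?)\}\}",
        lambda m: str(eval(m.group(1), {"__builtins__": {}}, context)),
        template,
    )

# Benign use: the developer controls the template body.
print(render("Hello {{ name }}", name="world"))  # Hello world

# Attacker-controlled template body: arbitrary expression evaluation.
print(render("{{ 7 * 7 }}"))  # 49
```

The vulnerability matters because deserialization lets the attacker choose the template body, and with it the expression being evaluated.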
CNA scored this 9.3 Critical. CWE-502: Deserialization of Untrusted Data. Twelve distinct vulnerable flows identified, including astream_events(version="v1"), Runnable.astream_log(), RunnableWithMessageHistory, and various caching mechanisms.
The real kicker? LLM response fields like additional_kwargs and response_metadata can be manipulated via prompt injection, then serialized and deserialized through normal framework operations like streaming events, message history, caches, and logs. One weaponized prompt cascades through your entire orchestration pipeline. The vulnerability sat undetected in langchain-core for over two and a half years.
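One defensive stopgap (not a substitute for upgrading langchain-core): strip the reserved "lc" marker from attacker-influenced fields like additional_kwargs and response_metadata before they enter any serialize/deserialize path. The helper below is an illustrative sketch; the class name in the payload is hypothetical.

```python
RESERVED_MARKER = "lc"

def scrub(value):
    """Recursively drop the reserved marker key from untrusted nested data."""
    if isinstance(value, dict):
        return {k: scrub(v) for k, v in value.items() if k != RESERVED_MARKER}
    if isinstance(value, list):
        return [scrub(v) for v in value]
    return value

# Untrusted LLM output smuggling a constructor spec into a metadata field.
# "SomeClass" is a placeholder, not a real LangChain class.
untrusted = {
    "response_metadata": {
        "lc": 1,
        "type": "constructor",
        "id": ["langchain_core", "SomeClass"],
        "kwargs": {},
    }
}
cleaned = scrub(untrusted)
print(RESERVED_MARKER in cleaned["response_metadata"])  # False
```

Without the marker, the framework's deserializer has no reason to treat the dict as an object spec; the permanent fix is still the patched release.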
The Shift-Left Playbook (Guess What? It Doesn't Scale)
Here's what traditional security tells you to do:
Scan dependencies and identify every service running vulnerable langchain-core versions
Open tickets across engineering teams
Prioritize based on CVSS scores and internet exposure
Schedule patching sprints and hope for resource availability
Cross your fingers that threat actors don't weaponize the exploit before Q2
When resources run out, add it to the backlog and call it "accepted risk"
By the time you've mapped impact across your microservices architecture, assessed which deployments touch secrets, coordinated patches across platform teams, and navigated sprint planning politics, attackers have already moved. They don't wait for your retrospectives.
That's the shift-left fantasy. Manual security doesn't scale to modern velocity. That's why 96% of organizations carry CVE debt they'll never eliminate.
Root's Automated Remediation: Zero Human Latency
Root detected CVE-2025-68664 and automatically patched it across customer container environments the moment disclosure hit. No scanning. No tickets. No sprint ceremonies. No backlog bargaining.
What actually happened:
Root's threat intelligence identified the deserialization vulnerability in langchain-core
Automated remediation engine deployed patches to affected containers
Vulnerable packages upgraded across all workloads (production, staging, CI/CD)
Applications continued running without service interruption
Zero human latency. Zero CVE debt. Zero excuses.
Root doesn't wait for security teams to triage vulnerabilities into Jira. It doesn't create tickets that get deprioritized against feature work. It remediates automatically the moment CVEs are disclosed, whether your containers are running in production clusters, ephemeral staging environments, or CI/CD pipelines.
While shift-left security creates spreadsheets, Root eliminates the vulnerability.
Why AI Supply Chain Risk Breaks Manual Security
LangGrinch exposes the fundamental problem with shift-left security in AI infrastructure: agentic orchestration pipelines create trust boundary problems that traditional AppSec workflows weren't designed to handle and can't scale to address.
When LLM outputs can influence serialized metadata that gets deserialized in privileged contexts with access to secrets, you're dealing with attack surfaces that move faster than sprint planning. The velocity of AI framework evolution and vulnerability disclosure now outpaces any manual security process by orders of magnitude.
Shift-left promised developers would become security experts. Spoiler: they didn't, and they shouldn't have to. Automated remediation exists because manual security theater can't scale to this reality.
Want to watch Root handle critical CVEs in your AI stack in real time? Book a demo and see automated remediation in action.
Root automatically remediates vulnerabilities in container images and running workloads. No code changes, no manual patching, no security debt. We can fix that.
Footnotes
[1] Cyata Research, "All I Want for Christmas is Your Secrets: LangGrinch hits LangChain Core," December 2025. Public package telemetry via pepy.tech showing ~847M total downloads as of late December 2025.