Developers rely on lightweight libraries to handle complex tasks like evaluating mathematical expressions within user inputs.
A newly disclosed vulnerability in the popular npm package expr-eval, however, could turn these tools into gateways for remote code execution, putting AI-driven applications at serious risk.
The expr-eval library, with over 250 dependent packages on npm, parses and evaluates mathematical expressions safely as an alternative to JavaScript’s risky eval() function.
It’s particularly valuable in generative AI systems, such as those built with LangChain’s JavaScript implementation, where models interpret math-heavy prompts from users.
For instance, NLP tools processing scientific queries or AI chatbots solving equations depend on it to avoid direct code injection.
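The hazard the library exists to avoid is easy to demonstrate. A minimal sketch, using only built-in JavaScript (the `naiveCalc` helper is hypothetical, for illustration only):

```javascript
// Naive approach: evaluating user-supplied "math" with eval() gives the
// input the full power of JavaScript, not just arithmetic.
function naiveCalc(expr) {
  return eval(expr); // unsafe: executes arbitrary JS, shown only to illustrate the risk
}

console.log(naiveCalc('2 + 3 * 4'));         // 14 — looks harmless
console.log(naiveCalc('(() => "pwned")()')); // but arbitrary code also runs
```

A dedicated expression parser like expr-eval avoids this by tokenizing and evaluating only a restricted math grammar instead of handing the string to the JavaScript engine.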
A related fork, expr-eval-fork, emerged to fix earlier issues like prototype pollution, but both projects now face this critical flaw.
Disclosed on November 7, 2025, by the CERT Coordination Center (VU#263614) and tracked as CVE-2025-12735, the vulnerability stems from the Parser class’s evaluate() method.
Attackers can craft malicious inputs that define arbitrary functions in the context object, bypassing safety checks.
This allows injected code to run system commands, access local files, or steal sensitive data. The danger is greatest in server-side environments where AI applications process untrusted user input.
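The general failure mode can be sketched in a few lines. This is an illustrative toy, not expr-eval's actual internals: an evaluator that blindly invokes function values found in a caller-supplied scope object turns "evaluating math" into code execution the moment an attacker can influence that scope.

```javascript
// Toy evaluator (illustration only): looks up a name in the supplied scope
// and calls it, with no check that the function was vetted by the developer.
function toyEvaluate(fnName, arg, scope) {
  const fn = scope[fnName];
  return typeof fn === 'function' ? fn(arg) : NaN;
}

// Benign use: the developer registers a known-safe function.
toyEvaluate('sqrt', 16, { sqrt: Math.sqrt }); // 4

// Hostile use: an attacker-influenced scope smuggles in arbitrary behavior.
const hostileScope = {
  sqrt: () => {
    // In a real attack this could shell out, read files, or exfiltrate data.
    return 'attacker code executed';
  },
};
toyEvaluate('sqrt', 16, hostileScope); // runs the attacker's function instead
```

The fix class for this kind of bug is to distinguish trusted, developer-registered functions from whatever happens to be present in the evaluation context.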
The impact is severe: CERT assigns the flaw a “Total” technical impact rating under the SSVC framework. In AI and NLP contexts, that means adversaries could hijack models to execute malware, manipulate outputs, or exfiltrate training data.
Imagine a healthcare AI evaluating patient formulas: a tainted input could compromise entire systems. With the original repository appearing unmaintained, the risk amplifies for projects still on vulnerable versions.
Fortunately, mitigation is straightforward. Developers should apply the patch from Pull Request #288 on the expr-eval GitHub repo, which introduces an allowlist of safe functions and requires explicit registration for custom ones.
Upgrading to expr-eval-fork version 3.0.0 also resolves the issue. Tools like npm audit can flag affected dependencies, and teams handling AI workloads should prioritize scanning.
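The allowlist idea behind the patch can be sketched independently of the library. The names below (`SAFE_FUNCTIONS`, `sanitizeScope`) are illustrative assumptions, not the actual API introduced in PR #288: only explicitly registered functions survive into the evaluation scope, and everything else from untrusted input is reduced to plain data.

```javascript
// Illustrative allowlist: the only functions callers may expose to expressions.
const SAFE_FUNCTIONS = { abs: Math.abs, sqrt: Math.sqrt, min: Math.min, max: Math.max };

// Strip anything from an untrusted variables object that is not plain data
// or an explicitly pre-registered function.
function sanitizeScope(untrusted) {
  const clean = Object.create(null); // no prototype chain to pollute
  for (const [key, value] of Object.entries(untrusted)) {
    if (typeof value === 'number' || typeof value === 'string') {
      clean[key] = value; // plain data passes through
    } else if (typeof value === 'function' && SAFE_FUNCTIONS[key] === value) {
      clean[key] = value; // only functions the developer registered survive
    }
    // attacker-defined functions and arbitrary objects are silently dropped
  }
  return clean;
}
```

In a pattern like this, the server would pass `sanitizeScope(userVars)` to the evaluator rather than the raw user-controlled object, so an injected function never reaches evaluation.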
Responsible disclosure credits go to reporter Jangwoo Choe of UKO, with support from GitHub Security and npm advisories.
As AI applications proliferate, this incident underscores the need for vigilant dependency management.
Unpatched libraries can undermine even the most innovative tech stacks, reminding developers that security must evolve alongside capability.