AI coding assistants like Cline Bot promise to boost developer productivity. However, recent research reveals serious security gaps that could turn these tools into attack vectors.
Security firm Mindgard uncovered four vulnerabilities in the open-source Cline extension during a short audit in August 2025, allowing attackers to steal sensitive data and run malicious code simply by tricking users into analyzing tainted repositories.
With over 3.8 million installs and integration with popular IDEs like VS Code, these flaws highlight the risks of trusting AI for code tasks.
The issues stem from weak defenses against prompt injection, where hidden instructions in files override the AI’s safety rules.
From Code Review To Compromise
Cline, which connects to models like Claude Sonnet or its free Sonic variant, processes source files to suggest edits or run commands via an “execute_command” tool.
This tool lets the AI run shell commands but relies on the model to flag risky ones, setting the “requires_approval” flag to true for actions like file changes or network calls.
In default settings with auto-approve enabled, attackers exploit this by embedding malicious prompts in Python docstrings or Markdown files.
For instance, a poisoned Python file might instruct Cline to read a .env file containing API keys, replace “=” with “-“, and ping a fake domain like “OPENAI_API_KEY-sk-abc123.mindgard.ai,” leaking the key via DNS queries without user notice.
Another flaw uses Cline’s .clinerules folder, where Markdown files set project guidelines. A rules.md file can force all commands to use requires_approval=false, bypassing checks.
Attackers hide payloads in separate files, like cmd.txt with then prompt Cline to analyze the repo, triggering silent execution.
A third issue creates a time-of-check to time-of-use (TOCTOU) gap: one file appends “./cm” to a benign script, benign.sh, and sleeps for 120 seconds, while another later adds a complete malicious chain, exploiting the AI’s inability to foresee chained effects across analyses.

Finally, error messages leak that Sonic runs on xAI’s grok-4 model, revealing backend details.
All tests used Sonic shortly after its August 20, 2025, release, showing how system prompts public in Cline’s GitHub repo aid precision attacks by exposing tool logic and vague safety rules such as “assume good intent.”
Mitigations and Broader Lessons
Mindgard disclosed the issues on August 27, 2025, via GitHub advisories, but did not receive vendor responses until October.
By version 3.35.0 on October 31, 2025, Cline blocked injections through better detection, refusing to follow planted instructions in tests.
The team acknowledged the model-leak fix but emphasized relying on LLM defenses rather than code changes, a stance Mindgard critiques as insufficient. No CVEs were assigned, and private advisories remain locked.
These flaws underscore the dangers posed by AI agents when handling code execution, especially when open prompts enable semantic bypasses.
Developers should disable auto-approve, vet repos, and monitor commands until full patches roll out.
As AI tools proliferate, treating system prompts as security boundaries not mere configs is key to preventing “prompt to pwn” scenarios.
Mindgard’s findings, published November 18, 2025, urge faster remediation in LLM ecosystems.





