Researchers at Tenable have uncovered seven critical vulnerabilities in OpenAI’s ChatGPT, affecting both GPT-4o and the newly launched GPT-5 models.
These flaws expose users to sophisticated attacks that can steal private data from chat histories and memories without any user interaction true zero-click exploits.
With hundreds of millions relying on large language models daily, this discovery highlights the fragility of AI safety mechanisms and the urgent need for stronger defenses.
Understanding ChatGPT’s Architecture and Weak Points
ChatGPT operates through a system prompt that includes user memories, web browsing tools, and search capabilities.
Memories store personal details across sessions, while the web tool uses a separate model called SearchGPT for isolation.
However, Tenable found that indirect prompt injections malicious instructions hidden in external sources like websites or comments can bypass these safeguards.
Attackers manipulate SearchGPT to alter responses, injecting commands that ChatGPT then follows, leading to data exfiltration or phishing links.
The url_safe endpoint, designed to filter malicious URLs, proves unreliable, allowing attackers to embed harmful redirects.
The Seven Vulnerabilities: From Injection To Persistence
The flaws range from subtle to devastating. First, trusted sites enable indirect injections via blog comments, where summarizing an article triggers malicious code.
Second, a zero-click attack in search contexts lets indexed malicious sites inject prompts when users ask innocent questions, exploiting OpenAI’s crawler.
Third, a one-click vulnerability arises from ChatGPT’s URL parameter feature, auto-submitting prompts.
Fourth, attackers bypass url_safe using Bing’s whitelisted tracking links to exfiltrate data letter by letter.
Tenable introduced novel techniques like Conversation Injection, where SearchGPT embeds hidden prompts in responses that ChatGPT obeys.
A markdown rendering bug hides this content from users. Finally, Memory Injection creates persistence, forcing ongoing data leaks across sessions.
Proof-of-concept attacks demonstrate phishing via summaries, image-based exfiltration, and memory tampering, all succeeding on both models.
Implications and Vendor Response
These vulnerabilities could compromise any user querying ChatGPT, especially as AI search replaces traditional engines.
OpenAI has patched some issues following disclosure, but prompt injection remains inherent to LLMs.
Tenable urges vendors to fortify isolation layers and safety checks. As AI integrates deeper into daily life, such flaws underscore the risks of unchecked innovation, potentially leaking sensitive information on a massive scale.





