Friday, April 24, 2026

AI

PiGPT Tool Converts Your Raspberry Pi Into A ChatGPT-Driven Smart System

noBGP has launched pi GPT, a custom ChatGPT tool that lets developers control Raspberry Pi devices directly via natural-language prompts, eliminating the need for complex setups for local AI-driven coding and deployment. Announced on November 18, 2025, this innovation uses noBGP's deterministic networking to...

Hackers Can Leverage Default ServiceNow AI Assistant Settings To Carry Out Prompt Injection Attacks

Earlier this year, cybersecurity researcher Aaron Costello uncovered a critical flaw in ServiceNow's Now Assist AI platform that enables hackers to perform second-order prompt-injection attacks. These attacks exploit default settings, allowing malicious actors to trick AI agents into executing unauthorized actions, such as reading...

Microsoft Unveils AI-Enhanced Azure Firewall via Security Copilot Integration

Microsoft has launched a new integration between Azure Firewall and Security Copilot, using generative AI to streamline threat investigations for cloud security teams. This enhancement allows analysts to query malicious traffic data in natural language, reducing the need for complex manual searches. By combining...

EchoGram Attack Demonstrates How Major AI Models Can Be Manipulated To Approve Malicious Inputs

Large language models like GPT-4, Claude, and Gemini rely on safety guardrails to block harmful prompts, but a new technique called EchoGram can trick these defenses into approving dangerous inputs. Developed by researchers at HiddenLayer in early 2025, EchoGram exploits weaknesses in how guardrails...

Chinese Threat Actors Leveraged Claude Code AI Capabilities To Compromise Large Technology Enterprises

In a groundbreaking revelation, Anthropic disclosed on November 13, 2025, that it disrupted the first known AI-driven cyber espionage campaign, in which Chinese state-sponsored hackers used the company's Claude Code AI to breach major organizations. The operation, detected in mid-September 2025, targeted around 30...

Critical NVIDIA NeMo Vulnerability Opens Door To Code Injection and Privilege Escalation

NVIDIA, a leader in AI computing, has disclosed two high-severity vulnerabilities in its NeMo Framework, an open-source toolkit for building generative AI models. Released on November 7, 2025, the security bulletin urges users to update to version 2.5.0 or later to patch flaws that...