Thursday, March 5, 2026

65% Of Top AI Firms Expose Verified Secrets On GitHub, Including Keys And Tokens

The study examined the 50 prominent AI companies on the Forbes AI 50 list, excluding those without a GitHub presence. Strikingly, 65% of them (nearly two-thirds) showed verified secret leaks.

These include API keys, tokens, and credentials for platforms like Perplexity, Weights & Biases, Groq, and NVIDIA, often hidden in less obvious spots such as deleted forks, gists, commit histories, and personal developer repos.

Traditional scanners miss much of this exposure because they focus only on the surface: the public repositories of the organization itself.

Wiz’s methodology went deeper, expanding across three key dimensions: depth (probing commit histories, forks, and workflow logs), perimeter (scanning organization members’ personal repos via followers, metadata correlations, and contributor networks like Hugging Face), and coverage (detecting AI-specific secret types overlooked by standard tools, such as LangChain’s enterprise keys or ElevenLabs tokens).
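Wiz's exact tooling is not public, but the "depth" dimension can be sketched as a pass over commit-history output, flagging added lines that match secret patterns. The following is a minimal illustration, not the report's implementation; the `hf_` prefix for Hugging Face tokens is real, while the generic rule and the exact token lengths are simplified assumptions:

```python
import re

# Illustrative rules only -- production scanners ship far larger,
# validated rule sets covering many AI-specific secret types.
SECRET_PATTERNS = {
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|token|secret)\b\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"
    ),
}

def scan_diff_for_secrets(diff_text: str) -> list[tuple[str, str]]:
    """Scan the *added* lines of a unified diff (e.g. the output of
    `git log -p --all`) and return (rule_name, offending_line) pairs."""
    findings = []
    for line in diff_text.splitlines():
        # Added lines start with "+"; "+++" is the file header, skip it.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, line[1:].strip()))
    return findings
```

Running this over `git log -p --all` output (rather than just the current tree) is what catches secrets that were committed and later "deleted", and the same pass can be repeated across forks and organization members' personal repos to cover the perimeter dimension.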

The findings paint a stark picture. Companies with leaks collectively boast valuations exceeding $400 billion, proving that even giants aren’t immune.

Smaller footprints weren’t spared: one firm with zero public repos and just 14 members still had exposures.

Secret types mirrored broader trends, with AI-related ones like Hugging Face tokens dominating, potentially granting access to private models and training data.

Disclosures drew mixed responses: some companies, like LangChain (where exposed keys carried organization-management permissions) and ElevenLabs, fixed issues quickly. Others went unanswered, with nearly half of the companies lacking a proper disclosure channel.

One undisclosed case involved a token unlocking 1,000 private models, and another involved Weights & Biases keys revealing employee-linked training data.

For AI teams racing to build the future, these leaks aren’t just oversights; they’re attack vectors.

The report stresses immediate actions: mandate public VCS secret scanning, establish robust disclosure processes, and customize detection for proprietary AI tokens.
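Fixed regex lists inevitably miss proprietary token formats, which is why many open-source scanners (TruffleHog, for example) supplement them with an entropy heuristic: random-looking strings carry more information per character than ordinary identifiers. A minimal sketch, where the 24-character minimum and 4.0-bit threshold are illustrative assumptions to tune per codebase:

```python
import math
import re

# Candidate strings: long runs of token-like characters.
CANDIDATE = re.compile(r"[A-Za-z0-9_\-]{24,}")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, estimated from character frequencies."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def high_entropy_strings(text: str, threshold: float = 4.0) -> list[str]:
    """Return candidate substrings random enough to look like secrets."""
    return [m for m in CANDIDATE.findall(text) if shannon_entropy(m) >= threshold]
```

The trade-off is noise: repetitive strings like placeholder values score near zero bits and are ignored, while genuinely random keys score well above the threshold, so the heuristic catches unknown token formats at the cost of occasional false positives on hashes and UUIDs.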

Treat employees’ personal accounts as extensions of your attack surface, and enforce policies like MFA and segregated accounts during onboarding.

Ultimately, while no AI50 company was entirely leak-free, solid practices can mitigate risks.

As AI evolves, so must security: adopting a “depth, perimeter, and coverage” approach ensures speed doesn’t sacrifice safety.

For cybersecurity pros tracking threats, this is a reminder: the real dangers hide below the surface.

Varshini
Varshini is a cybersecurity expert specializing in threat analysis, vulnerability assessment, and research, passionate about staying ahead of emerging threats and technologies.
