
Snyk Audit Finds 7% of ClawHub Skills Leak API Keys and PII in Plaintext

ClawHosters by Daniel Samer
3 min read

Snyk engineers Luca Beurer-Kellner and Hemang Sarkar scanned 3,984 skills on the ClawHub marketplace. 283 of them, roughly 7.1%, contain credential-exposure flaws that push API keys, passwords, and even credit card numbers through the LLM context window in plaintext.

Published February 5, 2026, the findings are uncomfortable. These aren't malicious skills. They're legitimate tools built by developers who probably forgot one thing: everything your agent processes passes through the model.

Four Ways Skills Leak Your Credentials

The researchers identified four distinct exposure vectors.

Verbatim output trap. A skill called moltyverse-email instructs the agent to output API keys word-for-word in its response. Anyone reading the conversation, or any logging middleware in the chain, captures the key.

Financial data exposure. buy-anything collects credit card numbers and passes them as arguments to curl commands. The full card number sits in the LLM context, in tool call logs, and potentially in server-side request logs.

Log exfiltration. prompt-log extracts session files without redacting sensitive content. Authentication tokens, personal data, whatever the session held gets pulled into the context window.

Plaintext storage. prediction-markets-roarin writes API keys directly into memory files the agent can read later. No encryption, no access controls.
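To make the verbatim output trap concrete, here is an invented SKILL.md fragment — not quoted from moltyverse-email or any real skill — showing the kind of instruction that produces it:

```markdown
<!-- Hypothetical example only; no real skill is quoted here. -->
## Setup

1. Read the user's API key from the configured credentials file.
2. Repeat the key back to the user verbatim so they can confirm it.
   <!-- Step 2 is the leak: the key now sits in the model's output,
        the conversation history, and any logging middleware. -->
```

The instruction in step 2 looks like harmless UX polish, which is exactly why this class of bug survives review.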

Not ClawHavoc. Something Worse.

This is different from the ClawHavoc research that flagged intentionally malicious skills. Those were attack tools designed to cause harm. What Snyk found is subtler: well-meaning developers treating AI agents like local scripts.

When you run a Python script on your laptop, passing an API key as a variable is fine. When you do the same thing through an LLM, that key travels through the model's context window, gets included in logs, and may persist in conversation history. The threat model is fundamentally different. Most skill authors, from what I can tell, haven't internalized that yet.
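The difference is easy to see in miniature. The sketch below simulates an agent that logs every tool call's arguments into its context (all names here are illustrative, not from any real skill or framework): passing a secret as a tool argument leaks it, while resolving it inside the tool boundary keeps it out of the context entirely.

```python
import os

# Hypothetical simulation; the agent, tool names, and key are all invented.
conversation_log = []


def agent_tool_call(tool, **args):
    """Simulates an agent: every tool call's arguments enter the context/log."""
    conversation_log.append({"tool": tool, "args": args})
    return "ok"


# Local-script habit: pass the key as an argument. It lands in the log.
agent_tool_call("send_email", api_key="sk-live-abc123", to="user@example.com")
assert "sk-live-abc123" in str(conversation_log)  # the key leaked into context

# Agent-safe pattern: pass only a reference; the tool resolves the secret
# itself, outside the model's view.
def send_email_tool(to):
    key = os.environ.get("EMAIL_API_KEY")  # resolved inside the tool boundary
    return "sent"


conversation_log.clear()
agent_tool_call("send_email", to="user@example.com")
assert "sk-live" not in str(conversation_log)  # nothing sensitive in context
```

Same tool, same job; the only change is where the secret is resolved.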

What You Can Do

Snyk recommends mcp-scan, a free Python tool that analyzes SKILL.md files for credential-exposure patterns. It catches the four vectors above and flags skills before you install them.

If you run a self-hosted OpenClaw instance, you should probably audit every marketplace skill you've added. Our safety scanner guide walks through the process.
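Setting aside mcp-scan's exact checks, a minimal grep-style audit of your installed skills might look like the sketch below. The patterns and the `skills/` directory layout are assumptions for illustration, not how mcp-scan actually works:

```python
import re
from pathlib import Path

# Illustrative patterns only; a real scanner would use a far richer rule set.
SUSPECT_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),           # OpenAI-style API keys
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # inline key assignments
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # possible card numbers
]


def audit_skill(text: str) -> list[str]:
    """Return the lines of a SKILL.md that match a credential-exposure pattern."""
    return [
        line.strip()
        for line in text.splitlines()
        if any(p.search(line) for p in SUSPECT_PATTERNS)
    ]


# Usage against a hypothetical skills directory:
# for skill in Path("skills").rglob("SKILL.md"):
#     for hit in audit_skill(skill.read_text()):
#         print(f"{skill}: {hit}")
```

A pass like this catches the obvious plaintext cases; it will not catch a skill that merely *instructs* the model to echo a secret, which is why reviewing the instructions themselves still matters.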

For ClawHosters customers, managed instances run curated skill sets that go through vetting before deployment. You're not pulling random skills off the marketplace.

Frequently Asked Questions

**How many ClawHub skills leak credentials?**

Snyk found 283 out of 3,984 scanned skills (7.1%) contain credential-exposure flaws. These are unintentional leaks in legitimate skills, not malicious tools.

**What does mcp-scan actually catch?**

The tool is designed to catch verbatim output traps, financial data exposure, log exfiltration, and plaintext storage patterns in SKILL.md files. It won't catch every edge case, but it handles the patterns Snyk documented.

**Are ClawHosters managed instances affected?**

No. ClawHosters instances run pre-approved, vetted skill sets. Skills from the open ClawHub marketplace are not installed on managed instances unless they pass review first.

*Last updated: March 2026*

Sources

  1. ClawHavoc
  2. mcp-scan
  3. safety scanner guide
  4. ClawHosters