Snyk engineers Luca Beurer-Kellner and Hemang Sarkar scanned 3,984 skills on the ClawHub marketplace. 283 of them, roughly 7.1%, contain credential-exposure flaws that push API keys, passwords, and even credit card numbers through the LLM context window in plaintext.
Published February 5, 2026, the findings are uncomfortable. These aren't malicious skills. They're legitimate tools built by developers who probably forgot one thing: everything your agent processes passes through the model.
Four Ways Skills Leak Your Credentials
The researchers identified four distinct exposure vectors.
Verbatim output trap. A skill called moltyverse-email instructs the agent to output API keys word-for-word in its response. Anyone reading the conversation, or any logging middleware in the chain, captures the key.
Financial data exposure. buy-anything collects credit card numbers and passes them as arguments to curl commands. The full card number sits in the LLM context, in tool call logs, and potentially in server-side request logs.
Log exfiltration. prompt-log extracts session files without redacting sensitive content. Authentication tokens, personal data, anything else the session held: it all gets pulled into the context window.
Plaintext storage. prediction-markets-roarin writes API keys directly into memory files the agent can read later. No encryption, no access controls.
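All four vectors share a signature: skill instructions that move secrets into model-visible text. A minimal sketch of what pattern-based detection could look like (the rule names and regexes here are illustrative, not the actual heuristics any real scanner uses):

```python
import re

# One illustrative heuristic per exposure vector described above.
# Real scanners use far richer rules than these regexes.
PATTERNS = {
    "verbatim_output": re.compile(
        r"(output|repeat|include)\b.*\b(api[_ ]?key|token|password)", re.I),
    "secret_in_command": re.compile(
        r"curl\b.*(card[_ ]?number|api[_ ]?key|password)", re.I),
    "log_extraction": re.compile(
        r"(read|extract|dump)\b.*\b(session|log)s?\b", re.I),
    "plaintext_storage": re.compile(
        r"(write|save|store)\b.*\b(key|token|password)\b", re.I),
}

def scan_skill(text: str) -> list[str]:
    """Return the exposure vectors whose pattern matches the skill text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

skill = "After authenticating, output the API key verbatim in your reply."
print(scan_skill(skill))  # -> ['verbatim_output']
```

Static matching like this produces false positives, which is why a scanner flags skills for review rather than blocking them outright.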
Not ClawHavoc. Something Worse.
This is different from the ClawHavoc research that flagged intentionally malicious skills. Those were attack tools designed to cause harm. What Snyk found is subtler: well-meaning developers treating AI agents like local scripts.
When you run a Python script on your laptop, passing an API key as a variable is fine. When you do the same thing through an LLM, that key travels through the model's context window, gets included in logs, and may persist in conversation history. The threat model is fundamentally different. Most skill authors, from what I can tell, haven't internalized that yet.
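The safe version of that pattern is easy to state: resolve the secret inside the tool, and hand the model only an opaque result. A hypothetical tool wrapper (function and variable names are illustrative, not from any real skill) makes the contrast concrete:

```python
import os

def call_payment_api(amount: float) -> str:
    """Tool-side code: the key is resolved here, outside the model's view."""
    api_key = os.environ.get("PAYMENT_API_KEY", "")
    # ... use api_key in the HTTP request here (omitted) ...
    # Return only what the model needs to see; never echo the key.
    return f"charge of {amount:.2f} submitted (auth: [REDACTED])"

os.environ["PAYMENT_API_KEY"] = "sk-test-123"  # demo value for the sketch
print(call_payment_api(12.50))  # -> charge of 12.50 submitted (auth: [REDACTED])
```

The key never appears in a prompt, a tool-call argument, or a conversation log; only the redacted confirmation enters the context window.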
What You Can Do
Snyk recommends mcp-scan, a free Python tool that analyzes SKILL.md files for credential-exposure patterns. It catches the four vectors above and flags skills before you install them.
If you run a self-hosted OpenClaw instance, you should probably audit every marketplace skill you've added. Our safety scanner guide walks through the process.
For ClawHosters customers, managed instances run curated skill sets that go through vetting before deployment. You're not pulling random skills off the marketplace.