OpenClaw custom skills are Markdown files. Not compiled binaries, not API wrappers, not Docker containers. Just a folder with a file called SKILL.md inside it. The agent reads your instructions at runtime and follows them.
The syntax takes about five minutes to learn. But there's one concept that trips up almost everyone, and it's the reason most homegrown skills never activate.
What a Skill Actually Is
A skill lives in ~/.openclaw/skills/your-skill-name/SKILL.md. That's it. YAML frontmatter at the top for metadata, Markdown below for instructions. When OpenClaw starts a new session, it scans every skill folder, reads the frontmatter, and decides which skills are relevant.
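Scaffolding one takes two commands; a minimal sketch (log-triage is a placeholder name):

```shell
# Create the skill folder and an empty SKILL.md (log-triage is a placeholder name).
mkdir -p ~/.openclaw/skills/log-triage
touch ~/.openclaw/skills/log-triage/SKILL.md
ls ~/.openclaw/skills/log-triage
```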
ClawHub (OpenClaw's public registry) hosts over 13,000 community skills as of March 2026. You can install those, build your own, or both.
The Part Most Tutorials Get Wrong
Every guide explains what the description field does. Few explain how OpenClaw actually uses it.
As LumaDock's technical guide puts it: "The frontmatter description is not marketing copy. It's closer to a trigger phrase." OpenClaw reads the name and description to decide whether to pull your skill's full instructions into context. Only after that match does the agent read what's below.
This means your description should use the exact words a user would type.
"Summarize errors from a service log" activates. "Comprehensive log analysis utility" probably doesn't.
I've seen well-written skills sit unused for weeks because the description was too abstract. Write it like you're finishing the sentence "Hey OpenClaw, can you..."
SKILL.md Anatomy
Here's a working example, a log triage skill that shows every required component:
---
name: log-triage
description: Summarize errors from a service log in a time window.
user-invocable: true
metadata: {"clawdbot":{"emoji":"🔍","requires":{"bins":["bash","date"]}}}
---
Below the frontmatter, structure your instructions like a runbook:
# Log triage
## What it does
Reads service logs, groups repeated errors, and returns a summary.
## Inputs needed
- Service name (required)
- Time window (default: last 1 hour)
## Workflow
1. Ask for service name and time window if not provided
2. Fetch logs via journalctl or docker logs
3. Group repeated error patterns
4. Show the exact command used for each step
## Output format
Markdown table: error pattern, count, first/last occurrence, sample line.
## Guardrails
- Never restart services or modify config files
- If logs are empty, say so and suggest what to check next
- Do not fabricate log entries
## Failure handling
If a command fails, include the command text and error output.
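Step 3's grouping is the kind of thing the agent can do with ordinary shell tools. This is an illustrative pipeline, not what OpenClaw runs internally, and the sample log is made up:

```shell
# Build a tiny sample log to work against (contents are invented for illustration).
cat > /tmp/sample.log <<'EOF'
2026-03-01T10:00:01 ERROR db connection refused
2026-03-01T10:00:05 ERROR db connection refused
2026-03-01T10:02:10 ERROR timeout waiting for upstream
2026-03-01T10:03:44 ERROR db connection refused
EOF

# Group repeated error patterns: strip the timestamp, count, sort by frequency.
grep ERROR /tmp/sample.log \
  | cut -d' ' -f2- \
  | sort | uniq -c | sort -rn
```

The output is a frequency-ranked list of distinct error patterns, which maps directly onto the table described under "Output format".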
The guardrails section matters more than you'd think. Without it, the agent guesses when data is missing. And guessing produces unreliable results.
Two frontmatter details worth calling out. The user-invocable: true flag makes your skill available as a slash command. And the metadata block must be a single-line JSON object. Multi-line YAML in that field causes silent parse failures, according to the official OpenClaw documentation. That one has burned people.
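You can sanity-check that metadata line before shipping. A quick sketch that reuses the example frontmatter above and leans on python3 as a JSON validator (the /tmp path is illustrative):

```shell
# Write the example frontmatter to a scratch file.
cat > /tmp/SKILL.md <<'EOF'
---
name: log-triage
description: Summarize errors from a service log in a time window.
user-invocable: true
metadata: {"clawdbot":{"emoji":"🔍","requires":{"bins":["bash","date"]}}}
---
EOF

# Extract the metadata field and confirm it is one line of valid JSON.
meta=$(grep -m1 '^metadata:' /tmp/SKILL.md | sed 's/^metadata: *//')
python3 -c 'import json,sys; json.loads(sys.argv[1]); print("metadata OK")' "$meta"
```

If the field had been spread across multiple YAML lines, the grep would pick up only a fragment and the JSON parse would fail loudly, instead of silently at skill load.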
Testing Your Skill
Run openclaw skills list --eligible to confirm OpenClaw sees your skill and it passes all gating checks (requires.bins, OS restrictions, etc.). If your skill doesn't show up, this command tells you why.
One thing to know: skills snapshot at session start. You need to open a new session after editing SKILL.md to pick up changes. If you want hot-reload during development, enable the file watcher in your config (skills.load.watch: true).
Publishing to ClawHub
Four steps:
1. Run clawhub auth login
2. Set your metadata namespace to metadata.clawdbot, not metadata.openclaw. The official docs sometimes show metadata.openclaw in examples, but ClawHub's validator only accepts clawdbot. A community contributor's 13-point checklist documents this gotcha from six failed iterations.
3. Run clawhub publish ./your-skill-folder
4. Wait for VirusTotal scanning (every ClawHub submission since February 2026 gets scanned automatically)
Keep in mind: VirusTotal scans executable artifacts. It does not analyze natural language instructions inside SKILL.md for prompt injection. Reviewers and users still need to read the source.
Security Checklist
Four rules that will save you headaches:
1. Never hardcode API keys or tokens. Use requires.env in metadata and SecretRef providers (source: env, source: file, or source: exec) to inject credentials safely.
2. Add set -euo pipefail to any shell scripts your skill references. After the ClawHavoc campaign hit ClawHub with over 1,000 malicious skills, the community treats missing script hardening as a red flag.
3. Declare minimum permissions. Over-permissioned skills get flagged by reviewers and reported by users (three reports auto-hides a skill).
4. Document every external endpoint your skill contacts. Users who can't verify where data goes won't install your skill.
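A helper script with rules 1 and 2 applied might open like this sketch (the script name, argument names, and defaults are all hypothetical):

```shell
#!/usr/bin/env bash
# Fail fast: exit on error (-e), on unset variables (-u), and on mid-pipe failures (pipefail).
set -euo pipefail

# Credentials should arrive via requires.env / a SecretRef provider, never as literals here.
SERVICE="${1:-demo}"          # positional arg with a safe fallback for illustration
SINCE="${2:-1 hour ago}"
echo "triaging ${SERVICE} since ${SINCE}"
```

With set -u active, any reference to a variable that was never injected aborts the script immediately instead of running commands with an empty credential.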
Run Custom Skills on ClawHosters
If you're using a ClawHosters managed instance, you can upload custom skills directly through the dashboard. No CLI setup needed. Your instance picks them up on the next session. For the full walkthrough, check the skills and plugins docs.