How to Build Custom OpenClaw Skills: From SKILL.md to ClawHub
ClawHosters by Daniel Samer
5 min read

OpenClaw custom skills are Markdown files. Not compiled binaries, not API wrappers, not Docker containers. Just a folder with a file called SKILL.md inside it. The agent reads your instructions at runtime and follows them.

The syntax takes about five minutes to learn. But there's one concept that trips up almost everyone, and it's the reason most homegrown skills never activate.

What a Skill Actually Is

A skill lives in ~/.openclaw/skills/your-skill-name/SKILL.md. That's it. YAML frontmatter at the top for metadata, Markdown below for instructions. When OpenClaw starts a new session, it scans every skill folder, reads the frontmatter, and decides which skills are relevant.
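The layout is simple enough to script. A minimal sketch in Python — the `root` parameter stands in for `~/.openclaw/skills`, and the frontmatter content here is illustrative, not a real skill:

```python
from pathlib import Path

FRONTMATTER = """---
name: hello-skill
description: Say hello to a named user.
---

# Hello skill
Greet the user by name.
"""

def scaffold_skill(root: Path, name: str, content: str = FRONTMATTER) -> Path:
    """Create <root>/<name>/SKILL.md — the entire on-disk footprint of a skill."""
    skill_dir = root / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_md = skill_dir / "SKILL.md"
    skill_md.write_text(content, encoding="utf-8")
    return skill_md

# Point root at the real location for an actual install:
# scaffold_skill(Path.home() / ".openclaw" / "skills", "hello-skill")
```

That's genuinely all the structure there is; everything else in this guide is about what goes inside the file.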

ClawHub (OpenClaw's public registry) hosts over 13,000 community skills as of March 2026. You can install those, build your own, or both.

The Part Most Tutorials Get Wrong

Every guide explains what the description field does. Few explain how OpenClaw actually uses it.

As LumaDock's technical guide puts it: "The frontmatter description is not marketing copy. It's closer to a trigger phrase." OpenClaw reads the name and description to decide whether to pull your skill's full instructions into context. Only after that match does the agent read what's below.

This means your description should use the exact words a user would type.

"Summarize errors from a service log" activates. "Comprehensive log analysis utility" probably doesn't.

I've seen well-written skills sit unused for weeks because the description was too abstract. Write it like you're finishing the sentence "Hey OpenClaw, can you..."

SKILL.md Anatomy

Here's a working example. A log triage skill that shows every required component:

```yaml
---
name: log-triage
description: Summarize errors from a service log in a time window.
user-invocable: true
metadata: {"clawdbot":{"emoji":"🔍","requires":{"bins":["bash","date"]}}}
---
```

Below the frontmatter, structure your instructions like a runbook:

```markdown
# Log triage

## What it does
Reads service logs, groups repeated errors, and returns a summary.

## Inputs needed
- Service name (required)
- Time window (default: last 1 hour)

## Workflow
1. Ask for service name and time window if not provided
2. Fetch logs via journalctl or docker logs
3. Group repeated error patterns
4. Show the exact command used for each step

## Output format
Markdown table: error pattern, count, first/last occurrence, sample line.

## Guardrails
- Never restart services or modify config files
- If logs are empty, say so and suggest what to check next
- Do not fabricate log entries

## Failure handling
If a command fails, include the command text and error output.
```
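Step 3 of that workflow — grouping repeated error patterns — is something the agent does ad hoc, but a sketch clarifies what "group" should mean: collapse the volatile parts of each line (timestamps, IDs, durations) so near-identical errors count as one pattern. A toy Python version, with made-up log lines:

```python
import re
from collections import Counter

def normalize(line: str) -> str:
    """Collapse volatile tokens so repeated errors group together."""
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<hex>", line)
    line = re.sub(r"\d+", "<n>", line)
    return line.strip()

def triage(lines):
    """Return (pattern, count, sample line) rows, most frequent first."""
    counts = Counter()
    samples = {}
    for raw in lines:
        if "ERROR" not in raw:
            continue
        key = normalize(raw)
        counts[key] += 1
        samples.setdefault(key, raw)
    return [(pat, n, samples[pat]) for pat, n in counts.most_common()]

logs = [
    "10:01 ERROR timeout after 30s connecting to db-7",
    "10:02 INFO retry scheduled",
    "10:05 ERROR timeout after 31s connecting to db-2",
    "10:09 ERROR disk full on /var/log",
]
for pattern, count, sample in triage(logs):
    print(f"| {pattern} | {count} | {sample} |")
```

The two timeout lines collapse into one pattern because the normalizer replaces the durations and hostname suffixes; that is the behavior the runbook is asking the agent to approximate.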

The guardrails section matters more than you'd think. Without it, the agent guesses when data is missing. And guessing produces unreliable results.

Two frontmatter details worth calling out. The `user-invocable: true` flag makes your skill available as a slash command. And the `metadata` block must be a single-line JSON object. Multi-line YAML in that field causes silent parse failures, according to the official OpenClaw documentation. That one has burned people.

Testing Your Skill

Run `openclaw skills list --eligible` to confirm OpenClaw sees your skill and it passes all gating checks (`requires.bins`, OS restrictions, etc.). If your skill doesn't show up, this command tells you why.

One thing to know: skills snapshot at session start. You need to open a new session after editing SKILL.md to pick up changes. If you want hot-reload during development, enable the file watcher in your config (`skills.load.watch: true`).

Publishing to ClawHub

Four steps:

  1. Run `clawhub auth login`
  2. Set your metadata namespace to `metadata.clawdbot`, not `metadata.openclaw`. The official docs sometimes show `metadata.openclaw` in examples, but ClawHub's validator only accepts `clawdbot`. A community contributor's 13-point checklist documents this gotcha from six failed iterations.
  3. Run `clawhub publish ./your-skill-folder`
  4. Wait for VirusTotal scanning (every ClawHub submission since February 2026 gets scanned automatically)

Keep in mind: VirusTotal scans executable artifacts. It does not analyze natural language instructions inside SKILL.md for prompt injection. Reviewers and users still need to read the source.
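Nothing stops you from running your own textual review before installing. A toy heuristic scan — the phrase list is illustrative, not a real detection ruleset, and no substitute for actually reading the skill:

```python
import re

# Hypothetical red-flag patterns for a SKILL.md body; extend to taste.
SUSPICIOUS = [
    r"ignore (all )?previous instructions",   # prompt-injection phrasing
    r"curl[^|\n]*\|\s*(ba)?sh",               # pipe-to-shell install
    r"base64\s+(-d|--decode)",                # decode-and-run obfuscation
    r"exfiltrat",
]

def scan_skill(text: str) -> list[str]:
    """Return the suspicious patterns that match a SKILL.md body."""
    low = text.lower()
    return [p for p in SUSPICIOUS if re.search(p, low)]
```

A skill that trips none of these can still be malicious, and a legitimate skill can trip one — treat hits as a prompt to read closely, not a verdict.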

Security Checklist

Four rules that will save you headaches:

  - Never hardcode API keys or tokens. Use `requires.env` in metadata and SecretRef providers (`source: env`, `source: file`, or `source: exec`) to inject credentials safely.
  - Add `set -euo pipefail` to any shell scripts your skill references. After the ClawHavoc campaign hit ClawHub with over 1,000 malicious skills, the community treats missing script hardening as a red flag.
  - Declare minimum permissions. Over-permissioned skills get flagged by reviewers and reported by users (three reports auto-hides a skill).
  - Document every external endpoint your skill contacts. Users who can't verify where data goes won't install your skill.
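For the first rule, the shape looks roughly like this. A sketch only: the field names follow the `requires.env`/SecretRef description above, the provider config's location and exact schema are assumptions, and `GITHUB_TOKEN` is a hypothetical placeholder:

```yaml
# In SKILL.md frontmatter: declare the variable, never its value.
# (Remember: metadata must stay single-line JSON.)
metadata: {"clawdbot":{"requires":{"env":["GITHUB_TOKEN"]}}}

# In your own config, a SecretRef provider supplies the value at runtime:
secrets:
  GITHUB_TOKEN:
    source: env        # or: file / exec
```

The point of the split is that the skill ships only the *name* of the credential; the value never leaves your machine or your config.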

Run Custom Skills on ClawHosters

If you're using a ClawHosters managed instance, you can upload custom skills directly through the dashboard. No CLI setup needed. Your instance picks them up on the next session. For the full walkthrough, check the skills and plugins docs.

Frequently Asked Questions

**How do I install a custom skill?**

On a ClawHosters managed instance, you upload SKILL.md through the web dashboard. The instance detects new skills automatically. For self-hosted setups, you place the file in `~/.openclaw/skills/your-skill-name/` and start a new session.

**Why isn't my skill activating?**

The most common cause is a vague `description` field. OpenClaw matches skills based on name plus description before loading full instructions. Rewrite your description using the exact words users would type. Also run `openclaw skills list --eligible` to check for gating failures.

**Are community skills safe to install?**

ClawHub scans every submission with VirusTotal since February 2026. But scanning only covers executable files, not the SKILL.md instructions themselves. Read the security hardening guide for a complete overview of what to verify before installing community skills.

**Should I use `metadata.clawdbot` or `metadata.openclaw`?**

For local use, either namespace works. For ClawHub publishing, you must use `metadata.clawdbot`. The ClawHub validator only parses that namespace. Skills published with `metadata.openclaw` will fail validation silently.

*Last updated: March 2026*

Sources

  1. over 13,000 community skills
  2. LumaDock's technical guide
  3. official OpenClaw documentation
  4. community contributor's 13-point checklist
  5. SecretRef providers
  6. ClawHosters managed instance
  7. skills and plugins docs
  8. security hardening guide