Your bot shows as online. You send it a message. Nothing comes back. You check again. Still nothing.
If an AI agent that won't respond is the reason you're here at 2am, you're in the right place. Most OpenClaw problems fall into a handful of patterns, and they're almost always fixable in under five minutes. Here's the diagnostic ladder that gets you unstuck fast.
Start Here: The Diagnostic Commands
Before you touch anything else, run these in order:
```shell
openclaw status
openclaw gateway status
openclaw logs --follow
openclaw doctor
```
`openclaw doctor` catches about 78% of issues on its own, in my experience. If it spits out a fix suggestion, try that first. If not, keep reading.
Bot Online But No Replies
This is the single most common problem. Your bot appears online in Telegram (or whatever messenger you use), but it ignores every message.
Nine times out of ten, it's a pairing issue. OpenClaw requires explicit approval before it responds to a user. Check your logs:
```shell
openclaw logs --follow
```
Look for lines that say `drop dm (pairing required)`. If you see those, your bot is working fine. It's just waiting for you to approve the conversation.
```shell
openclaw pairing pending
```
That shows all pending pairing requests. Approve them and messages start flowing.
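If you want a quick tally of how many messages were dropped while pairing was pending, you can grep a saved copy of the logs. A minimal sketch: only the `drop dm (pairing required)` marker comes from the logs above; the helper name is mine.

```shell
# count_pairing_drops: count messages dropped while awaiting pairing
# approval. Reads log text on stdin; the "drop dm (pairing required)"
# marker is the one shown in the logs above.
count_pairing_drops() {
  grep -c 'drop dm (pairing required)' || true
}

# Hypothetical usage: openclaw logs | count_pairing_drops
```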
But if pairing isn't the problem, check your `dmPolicy` setting. If it's set to something restrictive, the bot will silently drop messages without logging an obvious error. Also verify that group privacy mode isn't blocking the bot from seeing messages in group chats.
Gateway Refuses to Start
You run `openclaw gateway start` and it either errors out or hangs. This usually comes down to one of two things.
Port conflict. Something else is already using port 9090 or 18789:
```shell
ss -tlnp | grep -E '9090|18789'
```
If another process owns those ports, kill it or reconfigure.
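Going from "port is taken" to "here is the process to kill" means pulling the owning PIDs out of the `ss` output. A sketch that parses the standard `users:(("name",pid=NNNN,fd=NN))` column; the helper name is mine.

```shell
# pids_on_ports: given `ss -tlnp` output on stdin, print the unique
# PIDs of processes listening on the gateway ports 9090 or 18789.
pids_on_ports() {
  grep -E ':(9090|18789)[^0-9]' | grep -oE 'pid=[0-9]+' | cut -d= -f2 | sort -u
}

# Hypothetical usage: ss -tlnp | pids_on_ports
```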
Stale PID file. The gateway thinks it's already running:
```shell
rm ~/.openclaw/gateway.pid
openclaw gateway start
```
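Before deleting the PID file blindly, it's worth confirming the recorded process really is dead, so you never yank the file out from under a live gateway. A sketch: the helper name is mine, and `~/.openclaw/gateway.pid` is the path used above.

```shell
# clear_stale_pidfile: remove the gateway PID file only if the process
# it records is no longer running.
clear_stale_pidfile() {
  pidfile="${1:-$HOME/.openclaw/gateway.pid}"
  [ -f "$pidfile" ] || { echo "no pid file"; return 0; }
  pid=$(cat "$pidfile")
  if kill -0 "$pid" 2>/dev/null; then
    echo "gateway still running as pid $pid"   # leave the file alone
  else
    rm "$pidfile"
    echo "removed stale pid file"
  fi
}
```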
If neither of those works, you probably have a config schema error from a recent update. Run `openclaw doctor --fix` and it will attempt to migrate your config to the current schema.
401 API Key Errors
Your logs are full of `401 Unauthorized` from the LLM provider. This means your API key is wrong, revoked, or being overridden.
First, check which key OpenClaw is actually using. A common gotcha: you set the key in the config file, but a stale shell environment variable like `ANTHROPIC_API_KEY` is overriding it. Run `env | grep -i anthropic` (or whatever provider you use) to check.
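The environment check can be scripted so you don't have to eyeball `env` output. A sketch: the env var names are the common provider ones, and the helper name is mine.

```shell
# check_key_override: warn when a provider API key is set in the shell
# environment, since an env var silently wins over the config file.
check_key_override() {
  found=0
  for var in ANTHROPIC_API_KEY OPENAI_API_KEY; do
    if [ -n "$(printenv "$var")" ]; then
      echo "warning: $var is set and will override the config file"
      found=1
    fi
  done
  [ "$found" -eq 0 ] && echo "no overriding key env vars found"
  return 0
}
```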
If the environment looks clean, re-authenticate:
```shell
openclaw models auth login --provider anthropic
```
And verify the key prefix matches what your provider expects. Anthropic keys start with `sk-ant-`, OpenAI with `sk-`. Sounds obvious, but I've seen people paste keys from the wrong provider more often than you'd think.
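The prefix check is trivial to automate too. A sketch using only the two prefixes mentioned above; the helper name is mine.

```shell
# key_provider_hint: guess a key's provider from its prefix, to catch
# a key pasted from the wrong provider. The sk-ant- case must come
# before the more general sk- pattern.
key_provider_hint() {
  case "$1" in
    sk-ant-*) echo "anthropic" ;;
    sk-*)     echo "openai" ;;
    *)        echo "unknown" ;;
  esac
}
```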
429 Rate Limit Errors
This one trips people up because it looks like a key problem. It's not. A 429 means you've exceeded your provider's requests-per-minute or tokens-per-minute quota.
Quick fixes, ranked by effort:
- Disable `context1m` if you're using it. That feature sends a lot of tokens per request.
- Add a fallback model in your config. If Claude is rate-limited, traffic spills to a cheaper model.
- Upgrade your API tier with the provider.
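For the fallback-model option, the config shape is roughly this. The field names here are illustrative, not the exact OpenClaw schema; check your version's docs (or what `openclaw doctor` expects) before copying.

```yaml
# Hypothetical sketch: key names are illustrative, not the real schema.
models:
  primary: claude-sonnet
  fallbacks:
    - gpt-4o-mini   # cheaper model that absorbs traffic on 429s
```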
If you're running into 429s regularly, our token cost optimization guide covers strategies for keeping usage (and spend) under control.
Docker Container Keeps Restarting
After an update, your container enters a restart loop. The logs show config-related errors.
This almost always means the new version introduced breaking config changes. Run:
```shell
openclaw doctor
```
It will tell you exactly which config fields changed. You might need to rename a key, remove a deprecated setting, or restructure a section. The doctor command handles most of this automatically if you pass `--fix`.
One version-specific heads-up: upgrading from v2026.4.24 to v2026.4.29+ has caused CPU saturation and RPC latency spikes (up to 144 seconds in some cases). If you hit that, restore your config and state directories from backup before upgrading again.
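Taking that backup is a one-liner worth running before any upgrade. A sketch: the helper name is mine, and `~/.openclaw` as the data root is the path this guide uses elsewhere.

```shell
# backup_openclaw: snapshot the config/state directory to a tarball
# before upgrading, so a bad version bump is a restore, not a rebuild.
backup_openclaw() {
  src="${1:-$HOME/.openclaw}"
  dest="${2:-$HOME/openclaw-backup-$(date +%Y%m%d-%H%M%S).tar.gz}"
  tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
  echo "$dest"
}
```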
Slow Response Times
Your bot replies, but it takes 15, 20, sometimes 30 seconds. The question is whether the bottleneck is the LLM provider or your gateway.
```shell
openclaw diagnostics
```
This breaks down latency by component. If LLM response time dominates, your provider is slow, and there's not much you can do except switch models. If gateway overhead is high, you might have a network issue between your server and the API endpoint, or your server is underpowered.
A Note on Managed Hosting
If you're running OpenClaw on ClawHosters, Docker restarts, port conflicts, and config migration headaches don't apply to you. That's handled automatically. The problems you'll actually encounter are API key configuration, channel pairing, and model selection. For setup details, check the setup guide.
For security best practices regardless of how you host, the security hardening guide covers what matters.