Custom LLM Providers
The AI Setup tab lists common providers like Anthropic, OpenAI, Google, and DeepSeek. But OpenClaw supports any provider that offers an OpenAI-compatible API. This includes services like Featherless AI, Together AI, Fireworks AI, Anyscale, local Ollama instances, and anything else that speaks the OpenAI chat completions format.
You set this up through the Config Editor. No support ticket needed.
What You Need
Before you start, grab these from your provider:
- Base URL (the API endpoint, usually ending in `/v1`)
- API key (from your provider's dashboard)
- Model ID (the exact model identifier your provider uses)
If you're not sure about the base URL or model ID, check your provider's documentation. Most providers list these on their quickstart page.
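You can also confirm the base URL and key work before touching the config: most OpenAI-compatible providers expose a model-listing endpoint at `<base URL>/models`. A minimal Python sketch (the URL and key shown are placeholders for your own values):

```python
import json
import urllib.request

def models_endpoint(base_url: str) -> str:
    """Join the base URL with the standard /models listing path."""
    return base_url.rstrip("/") + "/models"

def list_models(base_url: str, api_key: str) -> list:
    """Return the model IDs the provider advertises (makes a network call)."""
    req = urllib.request.Request(
        models_endpoint(base_url),
        headers={"Authorization": "Bearer " + api_key},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]

# Example (requires a valid key):
# print(list_models("https://api.featherless.ai/v1", "your-api-key-here"))
```

If the call succeeds and your model ID appears in the output, all three values are correct.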
Step-by-Step Setup
1. Open the Config Editor
Go to your instance dashboard, then Settings > Config Editor. You'll see a disclaimer the first time. Accept it to proceed.
2. Find the models.providers section
In the JSON config, look for the `models` key, then `providers` inside it. You'll see `clawhosters` already listed there (that's the included AI). You're adding a new entry next to it.
3. Add your provider block
Add a new key under `providers` with your provider's name. Here's the structure:
```json
"featherless": {
  "baseUrl": "https://api.featherless.ai/v1",
  "apiKey": "your-api-key-here",
  "api": "openai-completions",
  "models": [
    {
      "id": "meta-llama/Llama-3.3-70B-Instruct",
      "name": "Llama 3.3 70B",
      "input": ["text"],
      "contextWindow": 131072,
      "maxTokens": 8192
    }
  ]
}
```
The key fields:

| Field | What it does |
|---|---|
| `baseUrl` | Your provider's API endpoint. Must end with `/v1` for most providers. |
| `apiKey` | Your API key. Stored in the config on your instance, never sent to ClawHosters. |
| `api` | Set this to `"openai-completions"` for any OpenAI-compatible provider. |
| `models` | List of models you want to use. Each needs an `id` that matches what your provider expects. |
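Before saving, you can sanity-check that a provider block carries all four fields from the table. A hypothetical helper (not part of OpenClaw, just an illustration):

```python
# Hypothetical check: the four fields the walkthrough above uses.
REQUIRED_FIELDS = {"baseUrl", "apiKey", "api", "models"}

def missing_fields(provider_block: dict) -> set:
    """Return which required fields the block is missing."""
    return REQUIRED_FIELDS - provider_block.keys()

block = {
    "baseUrl": "https://api.featherless.ai/v1",
    "apiKey": "your-api-key-here",
}
print(sorted(missing_fields(block)))
# → ['api', 'models']
```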
4. Save the config
Click Save. OpenClaw hot-reloads the config within a few seconds. No restart needed.
5. Set your primary model
Go to the AI Setup tab. In the model dropdown, you'll now see your custom models listed as `featherless/meta-llama/Llama-3.3-70B-Instruct` (provider name / model ID). Select it as your primary model.
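The dropdown label is simply the provider key and the model `id` joined by a slash, so model IDs that contain slashes of their own stay intact. Splitting on the first slash recovers the two parts; a small illustration of the naming convention:

```python
def split_model_ref(ref: str) -> tuple:
    """Split 'provider/model-id' on the first slash only."""
    provider, model_id = ref.split("/", 1)
    return provider, model_id

print(split_model_ref("featherless/meta-llama/Llama-3.3-70B-Instruct"))
# → ('featherless', 'meta-llama/Llama-3.3-70B-Instruct')
```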
Provider Examples
Here are working configs for popular alternative providers. Replace the API key with your own.
Featherless AI
Serverless inference with 6,000+ open-source models. Flat-rate pricing.
```json
"featherless": {
  "baseUrl": "https://api.featherless.ai/v1",
  "apiKey": "your-featherless-key",
  "api": "openai-completions",
  "models": [
    {
      "id": "meta-llama/Llama-3.3-70B-Instruct",
      "name": "Llama 3.3 70B",
      "input": ["text"],
      "contextWindow": 131072,
      "maxTokens": 8192
    },
    {
      "id": "Qwen/Qwen2.5-72B-Instruct",
      "name": "Qwen 2.5 72B",
      "input": ["text"],
      "contextWindow": 131072,
      "maxTokens": 8192
    }
  ]
}
```
Together AI
Fast inference for popular open-source models. Pay per token.
```json
"together": {
  "baseUrl": "https://api.together.xyz/v1",
  "apiKey": "your-together-key",
  "api": "openai-completions",
  "models": [
    {
      "id": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
      "name": "Llama 3.3 70B Turbo",
      "input": ["text"],
      "contextWindow": 131072,
      "maxTokens": 8192
    }
  ]
}
```
Local Ollama (via ZeroTier)
If you're running Ollama on your own machine and have ZeroTier set up, you can point your instance at it. See the ZeroTier Home LLM guide for the network setup.
```json
"ollama": {
  "baseUrl": "http://YOUR_ZEROTIER_IP:11434/v1",
  "apiKey": "ollama",
  "api": "openai-completions",
  "models": [
    {
      "id": "llama3.1:8b",
      "name": "Llama 3.1 8B (Local)",
      "input": ["text"],
      "contextWindow": 131072,
      "maxTokens": 8192
    }
  ]
}
```
Model Configuration Details
Each model entry supports these fields:
| Field | Required | Description |
|---|---|---|
| `id` | Yes | The model identifier your provider expects. Check their docs for exact names. |
| `name` | Yes | Display name shown in the model dropdown. Can be anything you want. |
| `input` | Yes | Set to `["text"]` for text-only models. Add `"image"` for vision models. |
| `contextWindow` | No | Maximum context length in tokens. Defaults vary by model. |
| `maxTokens` | No | Maximum output tokens per response. |
| `reasoning` | No | Set to `true` for reasoning models (DeepSeek R1, QwQ, etc.). |
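Putting the fields together, a hypothetical entry for a reasoning model might look like this (the context window and token limits are illustrative; check your provider's documentation for the real values):

```json
{
  "id": "deepseek-ai/DeepSeek-R1",
  "name": "DeepSeek R1",
  "input": ["text"],
  "contextWindow": 131072,
  "maxTokens": 8192,
  "reasoning": true
}
```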
Using Multiple Providers
You can have as many providers configured as you want. The included AI (ClawHosters managed models) stays active alongside your custom providers. This gives you fallback options.
In the AI Setup tab, you can:
- Set any model from any provider as your primary
- Configure fallback models that kick in if your primary is unavailable
- Assign specific models to specific tasks (vision, PDF processing, etc.)
If your custom provider goes down, your instance falls back to the next model in the chain.
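The fallback behavior amounts to trying each configured model in order until one responds. This sketch illustrates the idea only; it is not OpenClaw's actual implementation:

```python
from typing import Callable, List, Optional

def first_available(models: List[str], ask: Callable[[str], Optional[str]]) -> str:
    """Try each model in order; return the first reply that succeeds.

    `ask` stands in for a provider call: it returns a reply string,
    or None when that provider/model is unavailable.
    """
    for model in models:
        reply = ask(model)
        if reply is not None:
            return reply
    raise RuntimeError("all models in the fallback chain are unavailable")

# Toy usage: the custom provider is "down", the managed model answers.
chain = ["featherless/meta-llama/Llama-3.3-70B-Instruct", "clawhosters/default"]
print(first_available(chain, lambda m: None if m.startswith("featherless/") else "ok"))
# → ok
```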
Troubleshooting
Model not showing in the dropdown
After saving the config, go to AI Setup and refresh the page. The model dropdown pulls from the live config. If it still doesn't show, double-check your JSON syntax in the Config Editor. A missing comma or bracket breaks the whole config.
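A quick way to locate the broken spot is to run the config through a JSON parser, which reports the line and column of the first error. For example, with a comma missing after the `baseUrl` line:

```python
import json

broken = """{
  "featherless": {
    "baseUrl": "https://api.featherless.ai/v1"
    "api": "openai-completions"
  }
}"""  # missing comma after the baseUrl line

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print("line {}, column {}: {}".format(e.lineno, e.colno, e.msg))
# → line 4, column 5: Expecting ',' delimiter
```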
API errors or "model not found"
The model ID must match exactly what your provider expects. Some providers use org/model-name format, others just use model-name. Check your provider's API documentation for the correct format.
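If you can fetch the provider's advertised model list (for example from its `/v1/models` endpoint), a quick set comparison flags any configured IDs the provider won't accept. A hypothetical helper:

```python
from typing import List

def unknown_models(configured: List[str], advertised: List[str]) -> List[str]:
    """Return model IDs in your config that the provider does not advertise."""
    advertised_set = set(advertised)
    return [m for m in configured if m not in advertised_set]

# The second entry is missing its org prefix, so it won't match.
print(unknown_models(
    ["meta-llama/Llama-3.3-70B-Instruct", "Llama-3.3-70B-Instruct"],
    ["meta-llama/Llama-3.3-70B-Instruct"],
))
# → ['Llama-3.3-70B-Instruct']
```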
Slow responses
Response time depends on your provider. ClawHosters infrastructure is in Germany (Hetzner, Falkenstein). Providers with servers in the US or Asia will have higher latency compared to EU-based providers.
Key not working
Make sure your API key is active and has the right permissions. Most providers require at least chat completions access. Some providers have separate keys for different endpoints.
Related Docs
- LLM Add-on (BYOK vs Managed): Overview of all LLM options
- Use Your Own LLM at Home: Connect a local model via ZeroTier
- Instance Overview: What runs inside your instance