You want a Slack bot that handles customer support at work and a Telegram assistant that manages your personal tasks. Two agents, two personas, two completely different jobs. Setting up a multi-agent workflow like this on bare metal means wrangling agentDir configs, writing binding rules by hand, and figuring out tool permissions in JSON files. On ClawHosters, you spin up two instances from a dashboard and skip all of that.
But maybe you actually want both agents on one server. Or maybe you need agents that talk to each other. Let's look at how OpenClaw handles this, and where each approach makes sense.
## How OpenClaw Multi-Agent Actually Works
The Gateway is the core of any OpenClaw multi-agent setup. A single Gateway process can host multiple agents through its agents.list configuration. Each agent gets its own workspace directory, its own conversation memory, its own auth context, and its own session store.
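As a rough sketch, a two-agent Gateway config might look something like the following. The key names here are illustrative, not the exact OpenClaw schema, so check the documentation before copying:

```yaml
# Hypothetical Gateway config -- key names are illustrative
agents:
  list:
    - id: support-bot
      agentDir: /srv/agents/support      # each agent gets its own directory
      persona: "Customer support assistant for work Slack"
    - id: personal-assistant
      agentDir: /srv/agents/personal     # never point two agents at the same dir
      persona: "Personal task manager"
```

The important structural point is one entry per agent, each with its own agentDir, which is what gives each agent a separate workspace, memory, and session store.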
Bindings route incoming messages to the right agent. If you have a Slack binding pointed at agent A and a Telegram binding pointed at agent B, messages from each platform go to the correct agent automatically. The routing logic follows a "most specific wins" rule. A peer-level binding (specific user) beats a channel-level binding, which beats the default.
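The routing rules above might be expressed like this. Again, the field names are a hypothetical sketch of the binding format, not the literal syntax:

```yaml
# Hypothetical bindings -- "most specific wins"
bindings:
  - platform: slack
    agent: support-bot           # channel-level default for Slack
  - platform: telegram
    agent: personal-assistant    # default for Telegram
  - platform: slack
    peer: U12345                 # one specific user: beats the channel-level rule
    agent: personal-assistant
```

A message from Slack user U12345 would hit the peer-level binding and route to personal-assistant, even though the channel-level default sends everything else on Slack to support-bot.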
One thing that trips people up: never reuse an agentDir between two agents. I've seen this cause session collisions where Agent A suddenly starts answering with Agent B's personality. The memory stores overlap, auth tokens get confused, and you end up debugging for hours. Keep directories separate. Always.
## Three Patterns for Running Multiple Agents
| Pattern | How It Works | Best For | Downside |
|---|---|---|---|
| Bindings-based | One Gateway, multiple agents, routing via bindings | Different personas on same server | Shared resources, config complexity |
| Sub-agents | Main agent spawns background workers mid-conversation | Parallel tasks within one workflow | Harder to debug, token costs add up |
| Separate instances | Each agent gets its own VPS | Maximum isolation, independent scaling | Higher cost per agent |
The bindings approach is probably what you want if you're running a handful of agents that don't need to interact. Sub-agents are useful for things like code review pipelines where one agent writes code and another reviews it in the same conversation thread. Separate instances, which is what ClawHosters provides, give you infrastructure-level isolation rather than just config-level isolation.
## Lobster: Deterministic Pipelines That Actually Work
Here's where things get interesting. The Lobster workflow engine takes a different approach to AI agent orchestration. Instead of letting an LLM decide which agent to call next (which fails more often than you'd think), Lobster uses YAML-defined step sequences.
You define a pipeline. Step one: programmer writes code. Step two: reviewer checks it. Step three: tester runs tests. Each step pipes JSON data to the next. If a step needs human approval, Lobster pauses with a resumeToken and waits.
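That write-review-test pipeline could be sketched roughly like this. The step and field names are illustrative assumptions, not Lobster's exact schema:

```yaml
# Hypothetical Lobster pipeline -- field names are illustrative
pipeline: code-review
steps:
  - id: write
    agent: programmer
    prompt: "Implement the ticket"
  - id: review
    agent: reviewer
    input: $write.output         # each step pipes JSON from the previous one
  - id: approve
    type: human-approval         # pauses here and hands back a resumeToken
  - id: test
    agent: tester
    input: $review.output
```

The point is that the sequencing lives in the YAML, not in an LLM's judgment: the reviewer always runs after the programmer, and the human-approval step always gates the tests.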
One developer on the DEV Community documented running 12 concurrent agent sessions across four projects using this pattern. The key insight from that post: LLMs are unreliable routers. They hallucinate tool calls, skip steps, and get confused by complex branching. Deterministic pipelines let code handle the sequencing while LLMs do what they're actually good at, generating text and reasoning about problems.
You can also save 70-80% on token costs by using smarter model routing. Opus for the orchestrator that needs to reason about architecture, Haiku or Sonnet for the sub-agents doing simpler tasks like formatting or running checks. Community consensus sits around 5KB max for bootstrap files and a 200K token global budget per pipeline run.
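A model-routing config along these lines might look like the following sketch. The model identifiers and key names are assumptions for illustration:

```yaml
# Hypothetical per-agent model routing -- names are illustrative
agents:
  - id: orchestrator
    model: opus        # architecture-level reasoning gets the strong model
  - id: formatter
    model: haiku       # formatting and checks run fine on the cheap model
  - id: test-runner
    model: haiku
tokenBudget: 200000    # global cap per pipeline run, per community consensus
```

The savings come from the fact that most steps in a pipeline are mechanical; only the orchestrator needs expensive reasoning.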
## Security: The Part Nobody Wants to Talk About
Running multiple OpenClaw agents means multiplying your attack surface. Each agent needs its own tool allow/deny lists. The deny list always wins, which is the right design choice.
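A per-agent tool policy might look like this sketch; the tool names and keys are hypothetical, but the deny-wins semantics match what the docs describe:

```yaml
# Hypothetical per-agent tool policy -- deny always beats allow
tools:
  allow:
    - web_search
    - file_read
  deny:
    - shell_exec     # even if this also appeared under allow, deny would win
```

Deny-wins matters because allow lists tend to grow over time; a standing deny entry can't be accidentally overridden by a later, broader allow rule.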
Microsoft's security team published research calling this a "dual supply chain risk." Your agent trusts the LLM provider, and the LLM trusts the tools you gave it. If either link breaks, you have a problem. And honestly, GitHub issue #10004 on the OpenClaw repo lists five open isolation gaps that haven't been patched yet. Worth reading before you deploy anything sensitive.
This is where separate VPS instances help. On ClawHosters, each instance runs on its own server with its own firewall rules. A compromised agent can't reach another agent's data because there's no shared filesystem, no shared process space, nothing. Our security hardening guide covers the full picture.
The dashboard also handles auto-updates, including security patches. If you're self-hosting, you're responsible for pulling those yourself. With 180,000+ GitHub stars and the volume of community contributions that comes with them, that's a lot of commits to track.
If you want to capture that 70-80% token savings while running multi-agent setups, model routing is probably the single biggest lever. Check the documentation for setup details, or try a free trial to test your pipeline before committing.