A chip manufacturer just created a consumer product category for AI agents. That hasn't happened before.
On March 13, AMD published an official guide for running OpenClaw locally on AMD hardware. Two branded configurations. Dedicated product pages. The whole playbook.
RyzenClaw vs RadeonClaw
Here's what AMD is actually selling.
| Spec | RyzenClaw | RadeonClaw |
|---|---|---|
| Hardware | Ryzen AI Max+ (APU) | Radeon AI PRO R9700 (GPU) |
| Memory | 128GB unified | Dedicated VRAM |
| Speed (Qwen 3.5 35B) | ~45 tokens/sec | ~120 tokens/sec |
| Context window | 260K tokens | Standard |
| Concurrent agents | Up to 6 | Not specified |
| Time to process 10K input tokens | ~9.3 sec | ~4.4 sec |
| Use case | Agent swarms, long context | Raw speed |
The RyzenClaw configuration is probably the more interesting one. 128GB of unified memory means you can run larger models without a separate GPU, and AMD recommends allocating roughly 96GB of it to variable graphics memory. Six concurrent agents on a single chip is the kind of spec that makes agent swarm setups possible on a desktop.
RadeonClaw is the speed play. 120 tokens per second on Qwen 3.5 35B is fast. Really fast. Processing 10,000 input tokens in 4.4 seconds makes it viable for production-style workloads from a single GPU.
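Those 10K-token figures also imply a prompt-processing (prefill) rate well above the generation speed. A quick back-of-envelope from the table's published numbers:

```python
# Implied prefill throughput, derived from AMD's published figure of
# 10,000 input tokens processed in ~9.3 sec (RyzenClaw) / ~4.4 sec (RadeonClaw).
configs = {
    "RyzenClaw": 9.3,   # seconds to process 10,000 input tokens
    "RadeonClaw": 4.4,
}
for name, seconds in configs.items():
    rate = 10_000 / seconds
    print(f"{name}: ~{rate:,.0f} input tokens/sec prefill")
# → RyzenClaw: ~1,075 input tokens/sec prefill
# → RadeonClaw: ~2,273 input tokens/sec prefill
```

In other words, both configurations chew through prompts roughly an order of magnitude faster than they generate tokens, which is the usual prefill-vs-decode asymmetry.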
AMD vs NVIDIA: The Hardware War Is Real
This is AMD's direct answer to NVIDIA's DGX Spark announcement. Both companies now have branded OpenClaw hardware configurations with dedicated marketing pages. AMD even calls them "Agent Computers", which is honestly a better brand name than anything NVIDIA came up with.
The race for local AI agent hardware is no longer theoretical. Two of the world's biggest chip companies are now competing for the same market.
And there's a third option AMD announced alongside the hardware. The AMD Developer Cloud offers free vLLM-powered OpenClaw inference on AMD cloud infrastructure. No hardware purchase required. It's clearly a gateway drug to get developers building on AMD silicon before they commit to buying it.
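Because vLLM serves an OpenAI-compatible HTTP API, talking to a vLLM-backed endpoint is just a POST to `/v1/chat/completions`. A minimal sketch below, with the caveat that the base URL and model id are placeholders, not AMD's actual endpoint details:

```python
# Sketch of calling a vLLM OpenAI-compatible endpoint with the stdlib only.
# BASE_URL and the model id are hypothetical placeholders.
import json
import urllib.request

BASE_URL = "https://example-amd-cloud.invalid/v1"  # placeholder, not AMD's real URL

payload = {
    "model": "qwen-3.5-35b",  # placeholder model id
    "messages": [
        {"role": "user", "content": "Summarize today's build logs."}
    ],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_KEY",
    },
)
# resp = urllib.request.urlopen(req)  # would execute the call; skipped here
print(req.full_url)
```

The practical upshot: any tool that already speaks the OpenAI API can be pointed at the hosted endpoint by swapping a base URL, which is presumably the point of AMD offering it for free.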
What This Means If You Just Want a Running Agent
Local hardware is exciting. But exciting comes with caveats.
A Ryzen AI Max+ processor costs somewhere north of $2,000. You still need to build or buy a machine around it. That machine draws power 24/7 if you want your agent always available. It sleeps when your PC sleeps. And if something breaks, you're the IT department.
That's exactly the gap managed hosting fills. Your OpenClaw instance runs on Hetzner servers in Germany, always on, with auto-updates and backups handled for you. No $2,000 processor purchase. No power bill surprise. If you're curious about the real cost breakdown, we did a detailed self-hosting vs managed comparison.
The AMD vs NVIDIA hardware race proves one thing clearly: local AI agents are going mainstream. How you run yours is a different question.