AWS published an official CloudFormation sample for running OpenClaw on their infrastructure. It uses Amazon Bedrock for AI inference, IAM roles instead of API keys, and VPC private networking to keep traffic off the public internet. If you're already running workloads on AWS, this is probably the cleanest way to deploy OpenClaw inside your existing stack.
But it comes with trade-offs that most guides don't mention. Here's what you're actually getting into.
## What the CloudFormation Template Does
You click a button in the AWS Console, pick one of four supported regions (us-west-2, us-east-1, eu-west-1, ap-northeast-1), and wait about eight minutes. CloudFormation spins up a t4g.medium Graviton ARM instance running OpenClaw, creates VPC endpoints for private Bedrock access, configures IAM roles for credential-free authentication, and enables CloudTrail logging.
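If you'd rather deploy from the command line than the console, the same stack can be launched with the AWS CLI. This is a hedged sketch: the stack name and template URL below are placeholders, not the official sample's values, so substitute the real ones from the AWS repository.

```shell
# Pick one of the four supported regions.
REGION=us-west-2   # or: us-east-1, eu-west-1, ap-northeast-1
echo "Deploying to $REGION"

# Hypothetical invocation -- replace the template URL with the
# official sample's URL before running.
# aws cloudformation create-stack \
#   --stack-name openclaw \
#   --region "$REGION" \
#   --template-url "https://example.com/openclaw-template.yaml" \
#   --capabilities CAPABILITY_NAMED_IAM
```

Either way, expect roughly the same eight-minute wait while CloudFormation provisions the instance and endpoints.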
The architecture is straightforward. Your messaging platform (Telegram, WhatsApp, Slack) connects to the EC2 instance. OpenClaw processes the request and calls Bedrock through a private VPC endpoint. No API keys get stored on the instance. The IAM role attached to the EC2 instance handles authentication automatically through the AWS credential chain.
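To see what "credential-free" means in practice: on the instance, an AWS SDK client picks up the IAM role's temporary credentials automatically through the default credential chain, so no keys appear in code or config. A minimal sketch using boto3's real `converse` API for Bedrock Runtime (the model ID is Nova Lite, the template's default; the network call itself is shown commented out):

```python
import json

# The request payload for Bedrock's Converse API.
model_id = "amazon.nova-lite-v1:0"  # the template's default model
request = {
    "modelId": model_id,
    "messages": [{"role": "user", "content": [{"text": "Hello"}]}],
}

# On the EC2 instance you would run the call like this -- note that
# no access keys are passed; the IAM role supplies credentials:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-west-2")
#   response = client.converse(**request)

print(json.dumps(request, indent=2))
```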
One quirk worth knowing: you'll need to set AWS_PROFILE=default even though the IAM role does the actual authentication. OpenClaw's Bedrock integration doesn't read the EC2 metadata service directly.
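Concretely, that means exporting the variable in whatever environment launches OpenClaw (shell profile, systemd unit, or user data script). A minimal sketch, assuming the deployment region is us-west-2:

```shell
# OpenClaw's Bedrock integration reads AWS_PROFILE rather than the
# EC2 metadata service directly, so set it even though the IAM role
# does the real authentication.
export AWS_PROFILE=default
export AWS_REGION=us-west-2   # match the region you deployed in
echo "AWS_PROFILE=$AWS_PROFILE"
```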
## Available Models and the Inference Profile Problem
Through Bedrock, you get access to Amazon Nova (the default), Anthropic Claude, Meta Llama, and DeepSeek. The cost differences are significant. Nova Lite runs roughly 90% cheaper per token than Claude Sonnet. Claude Sonnet 4.5 on Bedrock costs $3 per million input tokens and $15 per million output tokens at the global endpoint.
Here's the part nobody talks about.
OpenClaw discovers available models using Bedrock's ListFoundationModels API call. That call doesn't return inference profiles. And newer models, including Claude Opus 4.6 and Nova 2 Lite, are only available through inference profile IDs. So OpenClaw's auto-discovery can't find them.
This is tracked in GitHub issue #14566, which was closed as "Not Planned." If you want Claude Opus 4.6, you'll need to manually configure the inference profile ID global.anthropic.claude-opus-4-6-v1 in your OpenClaw settings. No auto-discovery, no dropdown menu. You need to know the exact ID.
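For illustration, a manual override might look something like the fragment below. The key names here are assumptions, not OpenClaw's actual config schema; only the inference profile ID comes from the text above, so check your installation's settings format before copying this.

```json
{
  "provider": "bedrock",
  "model": "global.anthropic.claude-opus-4-6-v1",
  "region": "us-west-2"
}
```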
I think most people won't hit this on day one if they're fine with Nova or Sonnet. But if you specifically need the latest Claude model, expect to dig into configuration files.
## What It Actually Costs
Most guides quote the EC2 instance price and stop there. The full picture looks different.
| Component | Monthly Cost |
|---|---|
| EC2 t4g.medium (on-demand) | $24.53 |
| VPC endpoints (2 endpoints, 1 AZ) | $14.60 |
| CloudWatch logs | ~$0.50/GB |
| Fixed infrastructure total | ~$40-55/mo |
| Bedrock tokens (Claude Sonnet 4.5) | $3/$15 per 1M input/output |
That's $40 to $55 per month before you send a single message. Token costs add up on top of that depending on your usage.
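To make the token math concrete, here's a back-of-envelope estimate using the figures from the table above. The workload size (5M input / 1M output tokens per month) is an arbitrary example, and CloudWatch volume is assumed at about 2 GB:

```python
# Fixed infrastructure (USD/month), from the table above.
ec2 = 24.53            # t4g.medium on-demand
vpc_endpoints = 14.60  # 2 endpoints, 1 AZ
logs = 1.00            # ~2 GB at ~$0.50/GB (assumed volume)
fixed = ec2 + vpc_endpoints + logs

# Claude Sonnet 4.5 on Bedrock: $3 / $15 per 1M input / output tokens.
input_tokens = 5_000_000   # example workload
output_tokens = 1_000_000
tokens = input_tokens / 1e6 * 3 + output_tokens / 1e6 * 15

total = fixed + tokens
print(f"fixed ~${fixed:.2f} + tokens ~${tokens:.2f} = ~${total:.2f}/mo")
```

Even this modest workload roughly doubles the bill relative to the idle baseline, which is why quoting only the EC2 price is misleading.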
For comparison, ClawHosters plans start at $19/mo with no infrastructure to manage.
## The Security Story
This is where the AWS approach genuinely shines, and I'm not being sarcastic. The security model is strong.
No API keys stored anywhere. IAM roles auto-rotate credentials without human intervention. VPC endpoints mean your AI traffic never touches the public internet. CloudTrail creates immutable audit logs of every Bedrock invocation, which is the kind of thing compliance teams actually care about. The Cloudvisor security guide covers the baseline: SSM-only access (no open SSH ports), IMDSv2 enforced, encrypted EBS volumes.
If you're in a regulated industry where you need to prove that AI inference data stays within your AWS account boundary, this matters. It probably matters more than the cost difference.
## Who Should Actually Do This
Be honest with yourself about whether you're in the target audience.
This makes sense if you already run production workloads on AWS, have an ops team familiar with CloudFormation and IAM, need compliance-grade audit trails, or want to consolidate AI billing under your existing AWS account.
This doesn't make sense if you are setting up your first OpenClaw instance, don't have existing AWS infrastructure, want to avoid managing servers entirely, or just need a working AI agent without the operational overhead.
For the second group, a managed option like ClawHosters exists specifically so you don't have to think about VPC endpoints and inference profile IDs. Our self-hosted vs managed comparison breaks down the trade-offs in more detail.
There's also AWS AgentCore, a serverless alternative that runs OpenClaw in ephemeral containers. It eliminates the always-on EC2 cost. But it has a 2 to 4 minute cold start, which makes it impractical for real-time chat.