OpenClaw v2026.3.1: Adaptive Thinking, WebSocket Transport, and Native K8s Probes

ClawHosters by Daniel Samer
3 min read

76 contributors shipped OpenClaw v2026.3.1 on March 2, with 50+ issues closed and three changes that affect how your instance talks to AI providers.

If you're on ClawHosters, all of this is already live. You don't need to touch anything.

Claude 4.6 Gets Adaptive Thinking by Default

OpenClaw now sends `thinking.type: "adaptive"` when calling Claude Opus 4.6 or Sonnet 4.6. The old `budget_tokens` approach? Anthropic deprecated it for these models.

What adaptive thinking actually does: instead of burning a fixed token budget on every request, the model decides whether to reason deeply or skip thinking entirely. A simple factual question won't waste tokens on reasoning it doesn't need. A complex multi-step agent task gets deeper analysis automatically.

For other models, nothing changed. They keep their previous default of `low`. This only affects Claude 4.6 specifically.
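The per-model switch described above can be sketched in Python. This is a hypothetical illustration, not OpenClaw's actual code: the model identifiers and the shape of the non-adaptive fallback are assumptions based on the release notes, not confirmed API values.

```python
# Hypothetical sketch of how a client could pick a thinking config per model.
# Model IDs and the non-adaptive fallback shape are assumptions.
ADAPTIVE_MODELS = {"claude-opus-4.6", "claude-sonnet-4.6"}  # assumed IDs

def thinking_config(model: str) -> dict:
    """Return the `thinking` block to attach to a request."""
    if model in ADAPTIVE_MODELS:
        # Claude 4.6: let the model decide how much to reason per request.
        return {"type": "adaptive"}
    # Everything else keeps the previous default effort level.
    return {"level": "low"}

request = {
    "model": "claude-opus-4.6",
    "messages": [{"role": "user", "content": "What is 2 + 2?"}],
    "thinking": thinking_config("claude-opus-4.6"),
}
```

The point of the branch is that adaptive thinking carries no fixed budget: the request just declares the mode, and the model allocates (or skips) reasoning on its own.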

OpenAI WebSocket Streaming Replaces HTTP

The OpenAI Responses API transport flipped from HTTP to WebSocket-first. SSE is still there as a fallback if the WebSocket connection fails.

Why this matters: every tool-call round trip used to spin up a full HTTP request. For agents running 20+ tool calls in a workflow, that overhead adds up fast. According to benchmarks cited in the implementation discussion, persistent WebSocket connections cut end-to-end execution time by roughly 40% for those heavy workflows.

The default is `transport: auto`, which tries WebSocket first. You probably don't need to configure anything.
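If you do want to pin the transport, the override might look something like this. This is a hypothetical YAML fragment: the post only confirms the `transport: auto` key, so the section name and the explicit values are assumptions.

```yaml
# Hypothetical OpenClaw provider config (only `transport: auto` is confirmed).
openai:
  transport: auto        # default: try WebSocket first, fall back to SSE
  # transport: websocket # assumed value: force the persistent connection
  # transport: sse       # assumed value: force the old HTTP/SSE behavior
```

Forcing SSE could be useful behind proxies that terminate long-lived WebSocket connections; otherwise `auto` is the sensible default.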

Native Kubernetes Health Probes

OpenClaw v2026.3.1 ships four built-in endpoints: /health, /healthz, /ready, and /readyz. Before this, anyone running OpenClaw on Kubernetes had to use TCP socket probes or build custom sidecars. The community k8s-operator project has been working around this limitation for months.

If you already had a plugin handling those paths, it still works. The built-in endpoints only activate when no other handler claims the route.
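For self-hosters, wiring these endpoints into a Deployment uses standard Kubernetes probe fields. A minimal sketch — the port number is a placeholder for whatever your OpenClaw instance listens on:

```yaml
# Standard Kubernetes probes pointing at the new built-in endpoints.
# Port 8080 is a placeholder; adjust delays/periods to your startup time.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  periodSeconds: 5
```

This replaces the TCP socket probes mentioned above, which could only tell you the port was open, not that the instance was actually healthy.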

For ClawHosters users, this is how the platform monitors your instance health behind the scenes.

Other Notable Changes

Feishu/Lark got meaningful Docx improvements: `create_table`, `upload_image`, and `upload_file` actions. The new `OPENCLAW_SHELL` environment variable lets shell startup scripts detect when they're running inside OpenClaw. And a secrets/auth normalization fix quietly resolves edge cases where credential profiles weren't persisting correctly.
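A typical use of the new variable is branching in a startup script. A minimal sketch — the exact value OpenClaw assigns to `OPENCLAW_SHELL` isn't documented here, so this only tests for presence:

```shell
# Hypothetical ~/.profile snippet: branch on OPENCLAW_SHELL (presence only).
if [ -n "${OPENCLAW_SHELL:-}" ]; then
  MODE="openclaw"    # skip interactive-only setup (prompts, completions, ...)
else
  MODE="interactive" # normal login shell: run the full startup sequence
fi
echo "detected shell mode: $MODE"
```

This keeps slow, interactive-only startup work out of the shells OpenClaw spawns for tool calls.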

Your managed instance on ClawHosters was updated automatically. If you want to see what's new in your dashboard or read more about self-hosted vs. managed differences, both guides are available. For the full changelog, see the v2026.3.1 release notes.

Frequently Asked Questions

**Does adaptive thinking apply to all models?**

No. Only Claude Opus 4.6 and Sonnet 4.6 default to adaptive thinking. Other models, including older Claude versions, keep their previous default of `low`. The change is model-specific.

**Do ClawHosters users need to do anything to get v2026.3.1?**

You don't need to do anything. ClawHosters manages all updates automatically. Your instance is already running v2026.3.1 with adaptive thinking and the new health probes active.

**How much faster is the WebSocket transport?**

For tool-call-heavy workflows with 20+ round trips, benchmarks show roughly 40% faster end-to-end execution. For simple chat conversations with few tool calls, the difference is minimal. The default `transport: auto` setting handles the switch for you.
*Last updated: March 2026*

Sources

  1. OpenClaw v2026.3.1
  2. Anthropic deprecated it
  3. cited in the implementation discussion
  4. k8s-operator project
  5. ClawHosters
  6. what's new in your dashboard