OpenClaw v2026.3.7: ContextEngine Plugins, GPT-5.4, and Persistent Bindings


ClawHosters by Daniel Samer
4 min read

OpenClaw v2026.3.7 shipped on March 8 with 89 commits, over 200 bug fixes, and three features that genuinely change how the platform works. Here's what you need to know.

ContextEngine: Pluggable Memory for AI Agents

This one's been requested for about six months. Until now, OpenClaw's context management (how it decides what an agent remembers and what gets discarded) was hardcoded. You couldn't swap it out or customize it.

That's done. The new ContextEngine interface exposes 7 lifecycle hooks (bootstrap, ingest, assemble, compact, afterTurn, prepareSubagentSpawn, onSubagentEnded) that let third-party plugins control the entire memory pipeline. The Epsilla technical analysis describes this as a shift from monolithic design to a pluggable platform model.
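The seven hook names come straight from the release; everything else in the sketch below (argument shapes, return types, the no-op engine) is an illustrative assumption about what a pluggable memory pipeline built on those hooks might look like, not OpenClaw's actual API:

```python
from typing import Protocol, runtime_checkable

# Hook names are from the v2026.3.7 release notes; the argument and return
# shapes below are illustrative assumptions, not OpenClaw's real signatures.
@runtime_checkable
class ContextEngine(Protocol):
    def bootstrap(self, session: dict) -> None: ...            # set up state for a session
    def ingest(self, message: dict) -> None: ...               # record an incoming message
    def assemble(self, budget_tokens: int) -> list[dict]: ...  # build the prompt context
    def compact(self) -> None: ...                             # shrink history near the limit
    def afterTurn(self, turn: dict) -> None: ...               # post-turn bookkeeping
    def prepareSubagentSpawn(self, spec: dict) -> dict: ...    # hand context to a subagent
    def onSubagentEnded(self, result: dict) -> None: ...       # fold subagent results back in

# A do-nothing engine showing the minimal surface a plugin would have to cover.
class NullContextEngine:
    def bootstrap(self, session: dict) -> None: pass
    def ingest(self, message: dict) -> None: pass
    def assemble(self, budget_tokens: int) -> list[dict]: return []
    def compact(self) -> None: pass
    def afterTurn(self, turn: dict) -> None: pass
    def prepareSubagentSpawn(self, spec: dict) -> dict: return spec
    def onSubagentEnded(self, result: dict) -> None: pass

print(isinstance(NullContextEngine(), ContextEngine))  # → True
```

The point of the hook split is that each stage of the memory pipeline becomes swappable independently: a plugin can override only `compact`, say, while inheriting default behavior everywhere else.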

The first plugin built on the interface is lossless-claw by Martian Engineering. Instead of the default sliding-window approach (where old messages get dropped), lossless-claw persists every message to SQLite using a DAG-based summarization system. Nothing gets thrown away. Agents can search and recover details from compressed history.
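To make the DAG idea concrete, here is a toy in-memory sketch (not lossless-claw's actual design, which persists to SQLite): raw messages stay retrievable as leaf nodes, and "compaction" adds a summary node on top instead of deleting anything.

```python
from dataclasses import dataclass, field

# Toy illustration of DAG-style lossless compaction. In lossless-claw the
# nodes live in SQLite; here they live in memory for brevity.
@dataclass
class Node:
    text: str
    children: list["Node"] = field(default_factory=list)

class DagContext:
    def __init__(self) -> None:
        self.leaves: list[Node] = []  # every raw message, kept forever
        self.roots: list[Node] = []   # current top of the DAG

    def ingest(self, message: str) -> None:
        node = Node(message)
        self.leaves.append(node)
        self.roots.append(node)

    def compact(self, summarize) -> None:
        # Replace the current roots with one summary node; the originals
        # remain reachable as its children, so nothing is thrown away.
        summary = Node(summarize([n.text for n in self.roots]), children=self.roots)
        self.roots = [summary]

    def search(self, term: str) -> list[str]:
        # Full history stays searchable even after compaction.
        return [n.text for n in self.leaves if term in n.text]

ctx = DagContext()
for m in ["deploy failed", "retry worked", "user asked about billing"]:
    ctx.ingest(m)
ctx.compact(lambda texts: f"summary of {len(texts)} messages")
print(ctx.roots[0].text)      # → summary of 3 messages
print(ctx.search("billing"))  # → ['user asked about billing']
```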

If you don't install any plugin, the system falls back to LegacyContextEngine, which wraps the old behavior. Zero change for existing users who don't touch it.
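For contrast, a sliding window looks roughly like this (a toy approximation of the legacy behavior, not OpenClaw's actual code): once the window fills, older messages are simply gone.

```python
class SlidingWindowContext:
    """Toy approximation of sliding-window compaction: keep only the most
    recent N messages; anything older is dropped and unrecoverable."""

    def __init__(self, max_messages: int) -> None:
        self.max_messages = max_messages
        self.messages: list[str] = []

    def ingest(self, message: str) -> None:
        self.messages.append(message)

    def compact(self) -> None:
        # Discard everything outside the window.
        self.messages = self.messages[-self.max_messages:]

ctx = SlidingWindowContext(max_messages=2)
for m in ["m1", "m2", "m3"]:
    ctx.ingest(m)
ctx.compact()
print(ctx.messages)  # → ['m2', 'm3']
```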

GPT-5.4 and Gemini 3.1 Flash-Lite as Defaults

The default model aliases got bumped. The gpt alias now points to GPT-5.4 (1.05-million-token context window, 33% fewer hallucinations than GPT-5.2). The gemini alias now points to Gemini 3.1 Flash-Lite, which reaches first token about 2.5x faster and costs $0.25 per million input tokens. If you're watching your token spend, Flash-Lite is worth a look.

Worth noting: the previous Gemini default was pointing at a model ID that Google deprecated on March 9. So this was partly a bug fix, not just an upgrade.

If you pinned a specific model version in your config, nothing changes. The alias update only affects users on defaults.
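As a hypothetical example (the key names here are illustrative, not OpenClaw's documented config schema), the difference between riding the alias and pinning a version looks like:

```yaml
# Illustrative only — key names are assumptions, not OpenClaw's schema.
agent:
  model: gpt               # alias: resolves to GPT-5.4 after this release
  # model: openai/gpt-5.2  # pinned: keeps GPT-5.2 regardless of alias bumps
```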

Persistent Channel Bindings

Before 3.7, restarting your OpenClaw instance meant all Discord channel and Telegram topic bindings got wiped. You had to rebind agents manually after every restart, update, or crash. For anyone running production agent operations, this was a real problem.

Bindings now persist across restarts. Your agent assigned to a Discord channel stays assigned through updates, crashes, and maintenance windows.
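The mechanics are simple in principle: write bindings to durable storage and reload them on startup. A minimal sketch with SQLite (the table name, columns, and functions here are assumptions for illustration, not OpenClaw's schema):

```python
import sqlite3

# Illustrative sketch of binding persistence — schema and API are assumptions.
def save_binding(db: sqlite3.Connection, channel_id: str, agent_id: str) -> None:
    db.execute(
        "CREATE TABLE IF NOT EXISTS bindings (channel_id TEXT PRIMARY KEY, agent_id TEXT)"
    )
    # INSERT OR REPLACE lets a rebind overwrite the previous assignment.
    db.execute("INSERT OR REPLACE INTO bindings VALUES (?, ?)", (channel_id, agent_id))
    db.commit()

def load_bindings(db: sqlite3.Connection) -> dict[str, str]:
    rows = db.execute("SELECT channel_id, agent_id FROM bindings").fetchall()
    return dict(rows)

# With a file-backed database, bindings survive a process restart.
db = sqlite3.connect(":memory:")  # use a real file path in practice
save_binding(db, "discord:123", "support-agent")
print(load_bindings(db))  # → {'discord:123': 'support-agent'}
```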

Breaking Change: Gateway Auth

If you have both gateway.auth.token AND gateway.auth.password configured (including via SecretRefs), you now need to set gateway.auth.mode: token|password explicitly. Without it, startup will fail. Self-hosted users upgrading should also review our security hardening guide for related best practices.
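For self-hosted configs, the fix looks roughly like this. Only `gateway.auth.token`, `gateway.auth.password`, and `gateway.auth.mode` are confirmed by the release notes; the YAML layout and placeholder values are assumptions:

```yaml
# Hypothetical layout — verify against your instance's actual config schema.
gateway:
  auth:
    mode: token      # new in v2026.3.7: required when both credentials are set
    token: "..."     # or a SecretRef
    password: "..."  # or a SecretRef
```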

ClawHosters users: We already handled this. All managed instances were updated to v2026.3.7 automatically, including the auth mode migration. No action needed on your end.

Other Fixes

Slim Docker builds (OPENCLAW_VARIANT=slim), SecretRef support for gateway auth tokens, an Ollama fix that stopped reasoning tokens from leaking into output, and a Spanish locale addition round out the release. For the full list, check the release notes on GitHub.

If you're running OpenClaw self-hosted and want to skip the upgrade hassle, take a look at our managed hosting. Updates land automatically, breaking changes get handled, and you can focus on building your agents instead of maintaining infrastructure.

Frequently Asked Questions

What is the ContextEngine?

The ContextEngine is a new plugin interface that lets third-party code control how OpenClaw manages agent memory. It has 7 lifecycle hooks covering how messages get stored, assembled into prompts, and compacted when context limits approach. The first plugin using it is lossless-claw, which preserves all conversation history.

Does the GPT-5.4 default change affect my setup?

Only if you're on the default `gpt` model alias. The alias now points to GPT-5.4 instead of GPT-5.2. If you configured a specific model ID like `openai/gpt-5.2` in your setup, nothing changes. You keep the model you chose.

Do ClawHosters users need to do anything for this release?

No. All managed instances were updated automatically. The breaking change around gateway auth mode was handled during the migration. Your agents kept running without interruption.

Is lossless-claw enabled by default?

No. The default behavior uses LegacyContextEngine, which preserves the old sliding-window compaction. You need to install lossless-claw separately if you want DAG-based persistent memory. On ClawHosters, you can enable it through the instance settings.

Which models does OpenClaw support?

OpenClaw supports any model accessible through its configured providers. The v2026.3.7 update specifically bumped the default aliases to GPT-5.4 and Gemini 3.1 Flash-Lite, but you can still use Claude, Mistral, local Ollama models, or any other supported provider. Check our guide on choosing the best AI model for comparisons.
*Last updated: March 2026*

Sources

  1. v2026.3.7 shipped on March 8
  2. Epsilla technical analysis
  3. lossless-claw
  4. GPT-5.4
  5. Gemini 3.1 Flash-Lite
  6. token spend
  7. security hardening guide
  8. managed instances
  9. guide on choosing the best AI model