Simon Willison's lethal trifecta described three conditions that make an AI agent dangerous: access to private data, exposure to untrusted content, and the ability to communicate externally. Get all three in one system and you have a security problem. OpenClaw has all three by design.
Now Palo Alto Networks argues there's a fourth element that makes it worse.
Persistent Memory Changes the Math
Their February 2026 paper identifies persistent memory as an accelerant. Their exact words: "Persistent memory acts as an accelerant, amplifying the risks highlighted by the lethal trifecta."
The problem is SOUL.md and MEMORY.md. Both files get loaded at boot and treated as trusted configuration. But here's the catch: agent tools can modify them at runtime. As Penligent's research puts it, "If an attacker can trick the agent into writing a malicious instruction into its own SOUL.md, that instruction becomes part of the agent's permanent operating system."
That opens up attack classes that didn't exist before. Time-shifted prompt injection, where a payload gets injected on one day and detonates when agent state aligns later. Memory poisoning. Logic bomb-style activation across sessions.
Palo Alto mapped OpenClaw to every single category in the OWASP Top 10 for Agentic Applications. Every one.
How Bad Can It Get
Obsidian Security found that a single compromised agent poisoned 87% of downstream decision-making within four hours. That's not a theoretical exercise.
And Palo Alto didn't hold back on their conclusion: "The authors' opinion is that OpenClaw is not designed to be used in an enterprise ecosystem."
If you're running OpenClaw in production, you probably already know the security hardening basics. But persistent memory attacks go beyond what config hardening can catch. You need runtime monitoring, memory file integrity checks, and ideally container isolation so a compromised agent can't spread.
The OpenClaw safety scanner added memory integrity baselines in its latest update. Worth running if you haven't recently.