OpenClaw Provenance-Rich Memory: People-Aware Recall and Provider Catalogs
OpenClaw's April 2026 release added provenance to every memory item — who said it, when, in what context, from which provider. Here's why memory provenance changes how an agent reasons.
If you have ever shared a long-running OpenClaw agent with a teammate, you have probably hit this: it remembers something you told it last week, but it doesn't remember that you told it. So when your teammate asks about the same topic this morning, OpenClaw confidently quotes back something they never agreed to.
The April 2026 release fixed this by making memory provenance a first-class concept. Every memory item now carries who, when, where from, and why it was stored.
Key Takeaways
- OpenClaw memory now stores provenance: user, timestamp, source channel, source provider, and the reasoning context that wrote it.
- "People-aware" recall means recall queries can filter by who said something, not just what was said.
- Provider catalogs give OpenClaw a structured map of which integrations are available, with capability metadata and trust levels.
- Memory items that contradict each other can be arbitrated by recency, trust, and speaker rather than collapsing into a single confused average.
- Pairs with the 2026.5.7 stale-context invalidation so old memories do not pollute new decisions.
- Memory schema is now compatible with the same audit patterns you might use with Hermes Agent's FTS5 memory.
Why Provenance Matters in Agent Memory
A naive memory store treats every fact as equally true. If two memory items say different things, the agent has to guess. Provenance changes that calculus.
Consider a small example. Two memories:
- "User prefers Tailwind over Chakra UI." (stored 2026-02-14, from a chat with user alice)
- "User prefers Chakra over Tailwind." (stored 2026-04-30, from a chat with user bob)
Without provenance, an agent might surface either one, or worse, average them into nonsense like "user has mixed feelings." With provenance, the agent can see that these are different users' preferences and route accordingly.
The same logic applies to time: a fact stated in February 2025 about pricing may be wrong now. An agent that knows the when can downweight stale memories.
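The release notes describe downweighting stale memories but not the exact curve, so the shape of that calculation is an assumption. A minimal sketch of age-based decay, with the helper name and half-life parameter invented for illustration:

```python
from datetime import datetime, timedelta, timezone

def freshness_weight(written_at: datetime, now: datetime, half_life_days: float = 90.0) -> float:
    """Exponential age decay: a memory keeps half its weight every
    half_life_days. Hypothetical helper -- OpenClaw documents the idea
    of downweighting stale memories, not this specific formula."""
    age_days = (now - written_at).total_seconds() / 86400.0
    return 0.5 ** (max(age_days, 0.0) / half_life_days)
```

A fact stored 90 days ago would score half the weight of one stored today, so a February 2025 pricing claim competing with a fresh one loses cleanly.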
What an OpenClaw Memory Item Looks Like
After 2026.4.29, a memory item carries the following structure:
```yaml
memory:
  id: mem_2026_04_30_a1b2
  content: "User prefers Chakra over Tailwind for new projects."
  embedding: <vector>
  provenance:
    speaker: "user:bob"
    speaker_role: "engineer"
    written_at: "2026-04-30T11:14:23Z"
    via: "channel:slack"
    provider: "anthropic"
    model: "claude-sonnet-4-6"
    flow_id: "flow_onboarding_2026_q2"
    trust: "verified"
  freshness:
    ttl: 7776000  # 90 days
  tags:
    - "preferences"
    - "frontend"
```
Every field is queryable. Recall is no longer "find the closest embedding" — it is "find the closest embedding that matches these provenance predicates."
People-Aware Recall
The most user-visible consequence is that OpenClaw can now answer questions like:
- "What does Alice think about this?"
- "What have we agreed on in flows that involved the ops team?"
- "What did this customer say in their last support ticket, ignoring anything they told other agents?"
This requires no special tooling. The recall API accepts provenance filters:
```python
memories = openclaw.memory.recall(
    query="frontend framework preferences",
    speaker="user:bob",
    freshness="active",   # not expired
    min_trust="verified",
)
```
The agent's behaviour then becomes legibly correct. When it surfaces a preference, it knows whose preference, and it can say so out loud: "Bob said in April that he prefers Chakra."
Provider Catalogs
A second piece of the April release: provider catalogs. OpenClaw integrates with many model providers, tool providers, and external systems. Before April, each integration was its own ad-hoc plugin. After April, they all share a catalog format:
```yaml
provider: stripe
capabilities:
  - id: lookup_charge
    inputs: { charge_id: string }
    outputs: { amount: int, status: string }
    risk: low
  - id: issue_refund
    inputs: { charge_id: string, amount: int }
    outputs: { refund_id: string }
    risk: high
    requires_approval: true
trust_level: vendor-verified
auth: { mode: api_key, scope: "charges:read,refunds:write" }
```
When the agent reasons about which tool to call, the catalog tells it the input/output shape and the risk profile. High-risk capabilities can be gated behind a TaskFlow human-approval step automatically.
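The gating decision itself is straightforward to express. A sketch of how an agent runtime might consult the catalog before dispatching a tool call; the function name is invented, and the field names are simply the ones from the example catalog above:

```python
def needs_human_approval(catalog: dict, capability_id: str) -> bool:
    """Decide whether a capability call should be routed through a
    human-approval step. A sketch against the catalog shape shown
    above -- the real gating lives inside OpenClaw/TaskFlow."""
    for cap in catalog.get("capabilities", []):
        if cap["id"] == capability_id:
            # Explicit requires_approval wins; otherwise fall back to risk.
            return bool(cap.get("requires_approval")) or cap.get("risk") == "high"
    raise KeyError(f"unknown capability: {capability_id}")
```

With the Stripe catalog above, `lookup_charge` passes straight through while `issue_refund` is held for approval.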
Provider catalogs also feed the boundary credential resolution shipped in 2026.5.7: the catalog declares which scopes a capability needs, and the boundary enforces it.
Memory Arbitration
The biggest behavioural change in agent reasoning: when memories disagree, OpenClaw now arbitrates rather than averaging.
The default arbitration policy considers:
- Speaker match. A memory from the current user beats a memory from another.
- Recency. A more recent memory beats an older one unless trust differs.
- Trust. A verified memory beats an inferred one even if older.
- Flow scope. A memory written inside the current flow beats one from an unrelated flow.
You can override the policy per-agent. The point is that the agent has a policy at all, where before it had only "whatever the embedding search returned first."
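The default policy can be sketched as a lexicographic sort key: speaker match first, then trust, then recency, then flow scope. This is an illustration of the described precedence, not OpenClaw's actual policy object, and the trust-rank values are assumptions:

```python
TRUST_RANK = {"verified": 2, "inferred": 1}  # assumed ordering of trust levels

def arbitration_key(mem: dict, current_speaker: str, current_flow: str):
    """Sort key mirroring the default arbitration policy described above.
    Field names follow the memory schema shown earlier in the article."""
    p = mem["provenance"]
    return (
        p.get("speaker") == current_speaker,   # current user beats others
        TRUST_RANK.get(p.get("trust"), 0),     # verified beats inferred
        p.get("written_at", ""),               # ISO-8601 UTC strings sort chronologically
        p.get("flow_id") == current_flow,      # in-flow beats unrelated flow
    )

def arbitrate(memories, current_speaker, current_flow):
    # The single highest-ranked memory wins; nothing is averaged.
    return max(memories, key=lambda m: arbitration_key(m, current_speaker, current_flow))
```

Run against the Tailwind/Chakra example, this picks Alice's memory when Alice is speaking and Bob's when Bob is, and an old verified memory still beats a newer inferred one.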
Practical Adoption
A few patterns worth borrowing if you are migrating an older OpenClaw setup:
- Annotate all imports. If you bulk-imported docs into OpenClaw memory, backfill provenance: source URL, last fetched, trust level. Otherwise these become indistinguishable from user-spoken facts.
- Set TTLs on volatile facts. Pricing, headcount, integration availability — anything that changes. Use the freshness predicate from the stale context invalidation work.
- Use speaker filters in shared agents. A team agent should default to recalling the current speaker's preferences, not the loudest historical speaker's.
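For the first pattern, the backfill is a simple stamping pass over imported items. A sketch under stated assumptions: the `doc:` speaker prefix, the `channel:import` value, and the `imported` trust level are illustrative conventions, not documented OpenClaw values:

```python
def backfill_provenance(item: dict, source_url: str, fetched_at: str) -> dict:
    """Stamp a bulk-imported memory item with document provenance so it
    is never mistaken for a user-spoken fact. Field names follow the
    memory schema shown earlier; the values are invented conventions."""
    prov = item.setdefault("provenance", {})
    prov.update({
        "speaker": f"doc:{source_url}",   # a document, not a person
        "via": "channel:import",
        "written_at": fetched_at,         # when the doc was last fetched
        "trust": "imported",
    })
    return item
```

With this in place, speaker filters and trust thresholds in recall automatically separate imported documentation from things users actually said.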
Bringing It Back Together
Provenance is one of those changes that does not feel transformative on day one and becomes load-bearing by day thirty. Once memory carries who, when, and from where, the agent's behaviour becomes legible — and legible behaviour is what makes a shared agent actually shareable.
For the broader picture, see OpenClaw vs Hermes Agent and the Hermes FTS5 memory deep-dive for a contrasting design.
Sources
- OpenClaw releases overview — https://openclaw.com.au/updates
- OpenClaw GitHub releases — https://github.com/openclaw/openclaw/releases
- OpenClaw documentation — https://docs.openclaw.ai
- Related: OpenClaw TaskFlow Orchestration
- Related: OpenClaw 2026.5.7 Reliability Wins
- Related: Hermes Memory FTS5 Deep-Dive