memory-lancedb-pro
This skill should be used when working with memory-lancedb-pro, a production-grade long-term memory MCP plugin for OpenClaw AI agents. Use when installing, c...
Production-grade long-term memory system (v1.1.0-beta.8) for OpenClaw AI agents. Provides persistent, intelligent memory storage using LanceDB with hybrid vector + BM25 retrieval, LLM-powered Smart Extraction, Weibull decay lifecycle, and multi-scope isolation.
For full technical details (thresholds, formulas, database schema, source file map), see `references/full-reference.md`.
When the user says "help me enable the best config", "apply optimal configuration", or similar, follow this exact procedure:
Present these four plans in a clear comparison, then ask the user to pick one:

**Plan A - Full Power (Best Quality)**
- Embedding: `jina-embeddings-v5-text-small` (task-aware, 1024-dim)
- Reranker: `jina-reranker-v3` (cross-encoder, same key)
- Smart Extraction LLM: `gpt-4o-mini`
- Keys: `JINA_API_KEY` + `OPENAI_API_KEY`

**Plan B - Budget (Free Reranker)**
- Embedding: `jina-embeddings-v5-text-small`
- Reranker: `BAAI/bge-reranker-v2-m3` (free tier available)
- Smart Extraction LLM: `gpt-4o-mini`
- Keys: `JINA_API_KEY` + `SILICONFLOW_API_KEY` + `OPENAI_API_KEY`

**Plan C - Simple (OpenAI Only)**
- Embedding: `text-embedding-3-small`
- Smart Extraction LLM: `gpt-4o-mini`
- Keys: `OPENAI_API_KEY` only

**Plan D - Fully Local (Ollama, No API Keys)**
- Embedding: `mxbai-embed-large` (1024-dim, recommended) or `nomic-embed-text:v1.5` (768-dim, lighter)
- Smart Extraction LLM: `qwen3:8b` (recommended: best JSON output, native structured output, ~5.2GB), `qwen3:14b` (better quality, ~9GB, needs 16GB VRAM), `llama4:scout` (multimodal MoE, 10M ctx, ~12GB), `mistral-small3.2` (24B, 128K ctx, excellent instruction following, ~15GB), or `mistral-nemo` (12B, 128K ctx, efficient, ~7GB)
- Requires a running Ollama (`systemctl start ollama` or `ollama serve`); if no capable local LLM is available, set `"smartExtraction": false`

After the user selects a plan, ask in one message for:
- the API keys required by that plan
- the path to `openclaw.json` (skip if you want me to find it automatically)

If the user already stated their provider/keys in context, skip asking and proceed.

Do NOT apply any configuration until the API keys have been collected and verified (Step 2 below).
Run ALL key checks for the chosen plan before touching any config. If any check fails, STOP and tell the user which key failed and why. Do not proceed to Step 3.
Plan A / Plan B - Jina embedding check:

```bash
curl -s -o /dev/null -w "%{http_code}" \
  https://api.jina.ai/v1/embeddings \
  -H "Authorization: Bearer <JINA_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"model":"jina-embeddings-v5-text-small","input":["test"]}'
```

Plan A / B / C - OpenAI check:

```bash
curl -s -o /dev/null -w "%{http_code}" \
  https://api.openai.com/v1/models \
  -H "Authorization: Bearer <OPENAI_API_KEY>"
```

Plan B - SiliconFlow reranker check:

```bash
curl -s -o /dev/null -w "%{http_code}" \
  https://api.siliconflow.com/v1/rerank \
  -H "Authorization: Bearer <SILICONFLOW_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"model":"BAAI/bge-reranker-v2-m3","query":"test","documents":["test doc"]}'
```

Plan D - Ollama check:

```bash
curl -s -o /dev/null -w "%{http_code}" http://localhost:11434/api/tags
```
Interpret results:

| HTTP code | Meaning | Action |
|---|---|---|
| 200 / 201 | Key valid, quota available | ✅ Continue |
| 401 / 403 | Invalid or expired key | ❌ STOP: ask user to check key |
| 402 | Payment required / no credits | ❌ STOP: ask user to top up account |
| 429 | Rate limited or quota exceeded | ❌ STOP: ask user to check billing/quota |
| 000 / connection refused | Service unreachable | ❌ STOP: ask user to check network / that Ollama is running |
If any check fails: Tell the user exactly which provider failed, the HTTP code received, and what to fix. Do not proceed with installation until all required keys pass their checks.
If the user says keys are set as env vars in the gateway process, run the checks with `${VAR_NAME}` substituted inline, or ask them to paste the key temporarily for verification.
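For example, a sketch of the OpenAI check using an env var instead of a pasted key (assumes the variable is also exported in the shell where you run it):

```bash
curl -s -o /dev/null -w "%{http_code}\n" \
  https://api.openai.com/v1/models \
  -H "Authorization: Bearer ${OPENAI_API_KEY}"
```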
Check these locations in order:

```bash
# Most common locations
ls ~/.openclaw/openclaw.json
ls ~/openclaw.json

# Ask the gateway where it's reading config from
openclaw config get --show-path 2>/dev/null || echo "not found"
```

If not found, ask the user for the path.

```bash
# Read and display the current plugins config before changing anything
openclaw config get plugins.entries.memory-lancedb-pro 2>/dev/null
openclaw config get plugins.slots.memory 2>/dev/null
```

Check what already exists; never blindly overwrite existing settings.
Use the config block for the chosen plan. Substitute actual API keys inline if the user provided them directly; keep the `${ENV_VAR}` syntax if they confirmed env vars are set in the gateway process.

Plan A config (`plugins.entries.memory-lancedb-pro.config`):

```json
{
  "embedding": {
    "apiKey": "${JINA_API_KEY}",
    "model": "jina-embeddings-v5-text-small",
    "baseURL": "https://api.jina.ai/v1",
    "dimensions": 1024,
    "taskQuery": "retrieval.query",
    "taskPassage": "retrieval.passage",
    "normalized": true
  },
  "autoCapture": true,
  "autoRecall": true,
  "captureAssistant": false,
  "smartExtraction": true,
  "extractMinMessages": 2,
  "extractMaxChars": 8000,
  "llm": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "gpt-4o-mini",
    "baseURL": "https://api.openai.com/v1"
  },
  "retrieval": {
    "mode": "hybrid",
    "vectorWeight": 0.7,
    "bm25Weight": 0.3,
    "rerank": "cross-encoder",
    "rerankProvider": "jina",
    "rerankModel": "jina-reranker-v3",
    "rerankEndpoint": "https://api.jina.ai/v1/rerank",
    "rerankApiKey": "${JINA_API_KEY}",
    "candidatePoolSize": 12,
    "minScore": 0.6,
    "hardMinScore": 0.62,
    "filterNoise": true
  },
  "sessionMemory": { "enabled": false }
}
```
Plan B config:

```json
{
  "embedding": {
    "apiKey": "${JINA_API_KEY}",
    "model": "jina-embeddings-v5-text-small",
    "baseURL": "https://api.jina.ai/v1",
    "dimensions": 1024,
    "taskQuery": "retrieval.query",
    "taskPassage": "retrieval.passage",
    "normalized": true
  },
  "autoCapture": true,
  "autoRecall": true,
  "captureAssistant": false,
  "smartExtraction": true,
  "extractMinMessages": 2,
  "extractMaxChars": 8000,
  "llm": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "gpt-4o-mini",
    "baseURL": "https://api.openai.com/v1"
  },
  "retrieval": {
    "mode": "hybrid",
    "vectorWeight": 0.7,
    "bm25Weight": 0.3,
    "rerank": "cross-encoder",
    "rerankProvider": "siliconflow",
    "rerankModel": "BAAI/bge-reranker-v2-m3",
    "rerankEndpoint": "https://api.siliconflow.com/v1/rerank",
    "rerankApiKey": "${SILICONFLOW_API_KEY}",
    "candidatePoolSize": 12,
    "minScore": 0.5,
    "hardMinScore": 0.55,
    "filterNoise": true
  },
  "sessionMemory": { "enabled": false }
}
```
Plan C config:

```json
{
  "embedding": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "text-embedding-3-small",
    "baseURL": "https://api.openai.com/v1"
  },
  "autoCapture": true,
  "autoRecall": true,
  "captureAssistant": false,
  "smartExtraction": true,
  "extractMinMessages": 2,
  "extractMaxChars": 8000,
  "llm": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "gpt-4o-mini",
    "baseURL": "https://api.openai.com/v1"
  },
  "retrieval": {
    "mode": "hybrid",
    "vectorWeight": 0.7,
    "bm25Weight": 0.3,
    "filterNoise": true,
    "minScore": 0.3,
    "hardMinScore": 0.35
  },
  "sessionMemory": { "enabled": false }
}
```
Plan D config (replace models as needed; `qwen3:8b` recommended for the LLM, `mxbai-embed-large` for embedding):

```json
{
  "embedding": {
    "apiKey": "ollama",
    "model": "mxbai-embed-large",
    "baseURL": "http://localhost:11434/v1",
    "dimensions": 1024
  },
  "autoCapture": true,
  "autoRecall": true,
  "captureAssistant": false,
  "smartExtraction": true,
  "extractMinMessages": 2,
  "extractMaxChars": 4000,
  "llm": {
    "apiKey": "ollama",
    "model": "qwen3:8b",
    "baseURL": "http://localhost:11434/v1"
  },
  "retrieval": {
    "mode": "hybrid",
    "vectorWeight": 0.7,
    "bm25Weight": 0.3,
    "filterNoise": true,
    "minScore": 0.25,
    "hardMinScore": 0.28
  },
  "sessionStrategy": "none"
}
```
Plan D prerequisites - run BEFORE applying the config:

```bash
# 1. Verify Ollama is running (should return JSON with a model list)
curl http://localhost:11434/api/tags

# 2. Pull an embedding model (choose one)
ollama pull mxbai-embed-large        # recommended: 1024-dim, beats text-embedding-3-large, ~670MB
ollama pull snowflake-arctic-embed2  # best multilingual local option, ~670MB
ollama pull nomic-embed-text:v1.5    # classic stable, 768-dim, ~270MB

# 3. Pull an LLM for Smart Extraction (choose one based on RAM)
ollama pull qwen3:8b          # recommended: best JSON/structured output, ~5.2GB
ollama pull qwen3:14b         # better quality, ~9GB, needs 16GB VRAM
ollama pull llama4:scout      # multimodal MoE, 10M ctx, ~12GB
ollama pull mistral-small3.2  # 24B, 128K ctx, excellent, ~15GB
ollama pull mistral-nemo      # 12B, 128K ctx, efficient, ~7GB

# 4. Verify the models are installed
ollama list

# 5. Quick sanity check: the embedding endpoint works
curl http://localhost:11434/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model":"mxbai-embed-large","input":"test"}'
# Should return JSON containing a 1024-element vector
```
If Smart Extraction produces garbled or invalid output, the local LLM may not support structured JSON reliably. Try `qwen3:8b` first (it has native structured-output support). If it still fails, disable extraction:

```json
{ "smartExtraction": false }
```
If Ollama runs on a different host or in Docker, replace `http://localhost:11434/v1` with the actual host, e.g. `http://192.168.1.100:11434/v1`. Also set `OLLAMA_HOST=0.0.0.0` in the Ollama process so it accepts remote connections.
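A sketch for setting that variable when Ollama is managed by systemd (the typical Linux install; adjust if you run `ollama serve` manually or in Docker):

```bash
sudo systemctl edit ollama
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama
```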
For the `plugins.entries.memory-lancedb-pro.config` block, merge into the existing `openclaw.json` rather than replacing the whole file. Use a targeted edit of only the memory plugin config section (see the jq sketch below).
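One way to do that merge non-interactively; a sketch assuming the chosen plan's config block has been saved to a hypothetical `plan-config.json` and the config lives at the default path:

```bash
cp ~/.openclaw/openclaw.json ~/.openclaw/openclaw.json.bak   # back up first
jq --slurpfile cfg plan-config.json \
  '.plugins.entries."memory-lancedb-pro".config = $cfg[0]' \
  ~/.openclaw/openclaw.json > /tmp/openclaw.json.new \
  && mv /tmp/openclaw.json.new ~/.openclaw/openclaw.json
```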
Read the current `openclaw.json` first, then apply a surgical edit to the `plugins.entries.memory-lancedb-pro` section. Use the template that matches your installation method.
Method 1 - `openclaw plugins install` (the plugin was installed via the plugin manager): no `load.paths` or `allow` entries are needed; the plugin manager already registered the plugin.

```json
{
  "plugins": {
    "slots": { "memory": "memory-lancedb-pro" },
    "entries": {
      "memory-lancedb-pro": {
        "enabled": true,
        "config": { "<<OPTIMAL CONFIG HERE>>" }
      }
    }
  }
}
```
Method 2 - git clone with a manual path (workspace plugin): both `load.paths` AND `allow` are required; workspace plugins are disabled by default.

```json
{
  "plugins": {
    "load": { "paths": ["plugins/memory-lancedb-pro"] },
    "allow": ["memory-lancedb-pro"],
    "slots": { "memory": "memory-lancedb-pro" },
    "entries": {
      "memory-lancedb-pro": {
        "enabled": true,
        "config": { "<<OPTIMAL CONFIG HERE>>" }
      }
    }
  }
}
```
```bash
openclaw config validate
openclaw gateway restart
openclaw logs --follow --plain | rg "memory-lancedb-pro"
```

Expected output confirms:
- `memory-lancedb-pro: smart extraction enabled`
- `memory-lancedb-pro@...: plugin registered`

```bash
openclaw plugins info memory-lancedb-pro
openclaw hooks list --json | grep -E "before_agent_start|agent_end|command:new"
openclaw memory-pro stats
```

Then do a quick smoke test:
- `memory_store` with text: "test memory for verification"
- `memory_recall` with query: "test memory"

For new users, the community one-click installer handles everything automatically (path detection, schema validation, auto-update, provider selection, and rollback):

```bash
curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh
bash setup-memory.sh
```
Options: `--dry-run` (preview only), `--beta` (include pre-releases), `--ref v1.2.0` (pin a version), `--selfcheck-only`, `--uninstall`.
Source: https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup
Manual install from npm (requires Node.js 22.16+):

```bash
# Install from the npm registry (@beta tag = latest pre-release, e.g. 1.1.0-beta.8)
openclaw plugins install memory-lancedb-pro@beta

# Install the stable release from npm (@latest tag, e.g. 1.0.32)
openclaw plugins install memory-lancedb-pro

# Or install from a local git clone: use the master branch (matches npm @beta)
git clone -b master https://github.com/CortexReach/memory-lancedb-pro.git /tmp/memory-lancedb-pro
openclaw plugins install /tmp/memory-lancedb-pro
```

npm vs GitHub branches: `openclaw plugins install` installs from the npm registry (not directly from GitHub). The repo has two long-lived branches: `master` is the release branch (matches npm `@beta`), while `main` is older/behind. Always clone `master` if you want code that matches the published beta.
Then bind the memory slot and add your config (see the Configuration section below):

```json
{
  "plugins": {
    "slots": { "memory": "memory-lancedb-pro" },
    "entries": {
      "memory-lancedb-pro": {
        "enabled": true,
        "config": { "<<your config here>>" }
      }
    }
  }
}
```

Restart and verify:

```bash
openclaw gateway restart
openclaw plugins info memory-lancedb-pro
```
⚠️ Critical: workspace plugins (git-cloned paths) are disabled by default in OpenClaw. You MUST explicitly enable them.

```bash
# 1. Clone into the workspace
cd /path/to/your/openclaw/workspace
git clone -b master https://github.com/CortexReach/memory-lancedb-pro.git plugins/memory-lancedb-pro
cd plugins/memory-lancedb-pro && npm install
```

Add to `openclaw.json`; the `enabled: true` and the `allow` entry are both required:

```json
{
  "plugins": {
    "load": { "paths": ["plugins/memory-lancedb-pro"] },
    "allow": ["memory-lancedb-pro"],
    "slots": { "memory": "memory-lancedb-pro" },
    "entries": {
      "memory-lancedb-pro": {
        "enabled": true,
        "config": {
          "embedding": {
            "apiKey": "${JINA_API_KEY}",
            "model": "jina-embeddings-v5-text-small",
            "baseURL": "https://api.jina.ai/v1",
            "dimensions": 1024,
            "taskQuery": "retrieval.query",
            "taskPassage": "retrieval.passage",
            "normalized": true
          }
        }
      }
    }
  }
}
```

Validate and restart:

```bash
openclaw config validate
openclaw gateway restart
openclaw logs --follow --plain | rg "memory-lancedb-pro"
```
Expected log output:
- `memory-lancedb-pro: smart extraction enabled`
- `memory-lancedb-pro@...: plugin registered`

In general, for any path-based install:
- Use absolute paths in `plugins.load.paths`.
- Add the plugin id to `plugins.allow`.
- Bind the memory slot: `plugins.slots.memory = "memory-lancedb-pro"`.
- Set `plugins.entries.memory-lancedb-pro.enabled: true`.

Then restart and verify:

```bash
openclaw config validate
openclaw gateway restart
openclaw logs --follow --plain | rg "memory-lancedb-pro"
```
After the plugin starts successfully, determine which scenario applies and run the corresponding steps.

Scenario A: coming from the built-in `memory-lancedb` plugin (the most common upgrade path)

The old plugin stores its data in LanceDB at `~/.openclaw/memory/lancedb`. Use the migrate command:

```bash
# 1. Check that old data exists and is readable
openclaw memory-pro migrate check

# 2. Preview what would be migrated (dry run)
openclaw memory-pro migrate run --dry-run

# 3. Run the actual migration
openclaw memory-pro migrate run

# 4. Verify the migrated data
openclaw memory-pro migrate verify
openclaw memory-pro stats
```

If the old database is at a non-default path:

```bash
openclaw memory-pro migrate check --source /path/to/old/lancedb
openclaw memory-pro migrate run --source /path/to/old/lancedb
```
Scenario B: existing memories exported as JSON

If you have memories in the standard JSON export format:

```bash
# Preview the import (dry run)
openclaw memory-pro import memories.json --scope global --dry-run

# Import
openclaw memory-pro import memories.json --scope global
```

Expected JSON schema:

```json
{
  "version": "1.0",
  "memories": [
    {
      "text": "Memory content (required)",
      "category": "preference|fact|decision|entity|other",
      "importance": 0.7,
      "timestamp": 1234567890000
    }
  ]
}
```
Scenario C: memories stored in Markdown files (AGENTS.md, MEMORY.md, etc.)

There is no direct markdown import; the import command only accepts JSON, so you need to convert first.

Manual conversion approach (a scripted sketch follows below):
1. Extract each memory as a separate entry with `text`, `category`, and `importance`.
2. Build the JSON format shown above.
3. Import it with `openclaw memory-pro import`.

Or use the `memory_store` tool directly in the agent to store individual entries one at a time:

`memory_store(text="<extracted memory>", category="fact", importance=0.8)`
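If the markdown file is a flat bullet list, a minimal conversion sketch (assumes one memory per `- ` bullet in a hypothetical `MEMORY.md`; adjust `category` and `importance` per entry as needed):

```bash
grep '^- ' MEMORY.md | sed 's/^- //' | jq -R -s '{
  version: "1.0",
  memories: (split("\n") | map(select(length > 0))
             | map({text: ., category: "fact", importance: 0.7}))
}' > memories.json

openclaw memory-pro import memories.json --scope global --dry-run
```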
Note: markdown-based memory files (MEMORY.md, AGENTS.md) are workspace context files, not the same as the LanceDB memory store. You only need to migrate them if you want that content searchable via `memory_recall`.
Scenario D: fresh install, no prior memories

No migration needed. Verify the plugin is working with a quick smoke test:

```bash
openclaw memory-pro stats   # should show 0 memories
```

Then trigger a conversation; `autoCapture` will start storing memories automatically.
No manual action is required for LanceDB version changes. The plugin requires `@lancedb/lancedb ^0.26.2` as an npm dependency, which is installed automatically when you install or update the plugin; you do not need to manually install or upgrade LanceDB.

LanceDB 0.26+ changed how numeric columns are returned (Arrow `BigInt` for `timestamp`, `importance`, `_distance`, `_score`). The plugin handles this transparently at runtime via internal `Number(...)` coercion, so no migration commands are needed when moving between LanceDB versions.
TL;DR: LanceDB version compatibility is fully automatic. See the table below for when each maintenance command actually applies.
Command distinction (important):

| Command | When to use |
|---|---|
| `openclaw plugins update memory-lancedb-pro` | Update plugin code after a new release (npm-installed only) |
| `openclaw plugins update --all` | Update all npm-installed plugins at once |
| `openclaw memory-pro upgrade` | Enrich old memory-lancedb-pro entries that predate the smart-memory schema (missing L0/L1/L2 metadata + 6-category system); NOT related to the LanceDB version |
| `openclaw memory-pro migrate` | One-time migration from the separate built-in `memory-lancedb` plugin to Pro |
| `openclaw memory-pro reembed` | Rebuild all embeddings after switching the embedding model or provider |
When do you need `memory-pro upgrade`?

Run it if you installed memory-lancedb-pro before the smart-memory format was introduced (i.e., entries are missing `memory_category` in their metadata). Signs you need it:
- `memory_recall` returns results but without meaningful categories
- `memory-pro list --json` shows entries with no `l0_abstract` / `l1_overview` fields

Safe upgrade sequence:
```bash
# 1. Back up first
openclaw memory-pro export --scope global --output memories-backup.json

# 2. Preview what would change
openclaw memory-pro upgrade --dry-run

# 3. Run the upgrade (uses the LLM by default for L0/L1/L2 generation)
openclaw memory-pro upgrade

# 4. Verify the results
openclaw memory-pro stats
openclaw memory-pro search "your known keyword" --scope global --limit 5
```

Upgrade options:

```bash
openclaw memory-pro upgrade --no-llm        # skip the LLM, use simple text truncation
openclaw memory-pro upgrade --batch-size 5  # slower but safer for large collections
openclaw memory-pro upgrade --limit 50      # process only the first N entries
openclaw memory-pro upgrade --scope global  # limit to one scope
```
```bash
openclaw plugins list                          # show all discovered plugins
openclaw plugins info memory-lancedb-pro       # show plugin status and config
openclaw plugins enable memory-lancedb-pro     # enable a disabled plugin
openclaw plugins disable memory-lancedb-pro    # disable without removing
openclaw plugins update memory-lancedb-pro     # update an npm-installed plugin
openclaw plugins update --all                  # update all npm plugins
openclaw plugins doctor                        # health check for all plugins
openclaw plugins install ./path/to/plugin      # install a local plugin (copies + enables)
openclaw plugins install @scope/plugin@beta    # install from the npm registry
openclaw plugins install -l ./path/to/plugin   # symlink for dev (no copy)
```
Gateway restart required after `plugins install`, `plugins enable`, `plugins disable`, `plugins update`, or any change to `openclaw.json`; changes do not take effect until you run `openclaw gateway restart`.

Common gotchas:
- After editing `openclaw.json`, you MUST run `openclaw gateway restart`; changes are NOT hot-reloaded.
- Git-clone installs need BOTH `plugins.allow: ["memory-lancedb-pro"]` AND `plugins.entries.memory-lancedb-pro.enabled: true`; without them the plugin silently does not load.
- `${OPENAI_API_KEY}` expansion requires env vars set in the OpenClaw Gateway service process, not just your shell.
- Use absolute paths in `plugins.load.paths`.
- `baseURL`, not `baseUrl`: the embedding (and llm) config field is `baseURL` (capital URL). The wrong casing causes the schema validation error "must NOT have additional properties". Also note the required `/v1` suffix: `http://localhost:11434/v1`, not `http://localhost:11434`. Do not confuse this with `agents.defaults.memorySearch.remote.baseUrl`, which uses the other casing.
- After editing `.ts` files under `plugins/`, run `rm -rf /tmp/jiti/` BEFORE `openclaw gateway restart`.
- Referencing an unknown plugin id in `entries`, `allow`, `deny`, or `slots` is a validation error; the plugin id must be discoverable before you reference it.
- Configure the `llm` section separately if extraction should use its own provider; it falls back to the embedding key/URL otherwise.
- Multi-scope access requires an explicit `scopes.agentAccess` mapping; without it, agents only see the global scope.
- Session memory hooks into the `/new` command; test with an actual `/new` invocation.
- Custom rerankers need both `rerankApiKey` AND `rerankEndpoint`.
- Run `openclaw config get plugins.entries.memory-lancedb-pro` to verify what's actually loaded.

Environment variables:
- `OPENCLAW_HOME`: sets the root config/data directory (default: `~/.openclaw/`)
- `OPENCLAW_CONFIG_PATH`: absolute-path override for `openclaw.json`
- `OPENCLAW_STATE_DIR`: override for the runtime state/data directory

Set these in the OpenClaw Gateway process's environment if the default `~/.openclaw/` path is not appropriate.

```bash
openclaw doctor                               # full health check (recommended)
openclaw config validate                      # config schema check only
openclaw plugins info memory-lancedb-pro      # plugin status
openclaw plugins doctor                       # plugin-specific health
openclaw hooks list --json | grep memory      # confirm hooks registered
openclaw memory-pro stats
openclaw memory-pro list --scope global --limit 5
```
Full smoke test checklist:
- `enabled: true` and config loaded
- hooks registered: `before_agent_start`, `agent_end`, `command:new`
- `memory_store` → `memory_recall` round trip via tools
- `/new` test

Config validation tool (from CortexReach/toolbox):
```bash
# Download once
curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/scripts/config-validate.mjs -o config-validate.mjs

# Run against your openclaw.json
node config-validate.mjs

# Or validate a specific config snippet
node config-validate.mjs --json '{"embedding":{"baseURL":"http://localhost:11434/v1","model":"bge-m3","apiKey":"ollama"}}'
```
Exit code 0 = pass/warn, 1 = errors found.
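Since the exit code distinguishes pass/warn from errors, the validator can gate a scripted setup step; a small sketch:

```bash
node config-validate.mjs || { echo "openclaw.json has errors; aborting"; exit 1; }
openclaw gateway restart
```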
| Error message | Root cause | Fix |
|---|---|---|
| `must NOT have additional properties` (embedding) | Field name typo in the embedding config (e.g. `baseUrl` instead of `baseURL`) | Check all field names against the schema tables below; field names are case-sensitive |
| `must NOT have additional properties` (top-level config) | Unknown top-level field in the plugin config | Remove or correct the field |
| Plugin silently not loading | `plugins.allow` missing (git-clone install) or `enabled: false` | Add the `allow` entry and set `enabled: true`, then restart |
| Unknown plugin id validation error | Plugin referenced in `entries` / `allow` / `slots` before it is discoverable | Install/register the plugin first, then add the config references |
| `${...}` not expanding / auth errors despite env var set | Env var not set in the gateway process environment | Set the env var in the service that runs the OpenClaw gateway, not just your shell |
| Hooks (`before_agent_start`, `agent_end`) not firing | Gateway not restarted after an install/config change | Run `openclaw gateway restart` |
| Embedding errors with Ollama | Wrong `baseURL` format | Must be `http://localhost:11434/v1` (with `/v1`); the field must be `baseURL`, not `baseUrl` |
| `memory-pro stats` shows 0 entries after a conversation | `autoCapture: false` or `extractMinMessages` not reached | Set `autoCapture: true`; at least `extractMinMessages` (default 2) turns are needed |
| Memories not injected before agent replies | `autoRecall` is false (schema default) | Explicitly set `autoRecall: true` |
| jiti cache error after editing plugin files | Stale compiled cache | Run `rm -rf /tmp/jiti/`, then `openclaw gateway restart` |
{ "embedding": { "provider": "openai-compatible", "apiKey": "${OPENAI_API_KEY}", "model": "text-embedding-3-small" }, "autoCapture": true, "autoRecall": true, "smartExtraction": true, "extractMinMessages": 2, "extractMaxChars": 8000, "sessionMemory": { "enabled": false } }
Note:
autoRecall is disabled by default in the plugin schema โ explicitly set it to true for new deployments.
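The minimal override, if you change nothing else:

```json
{ "autoRecall": true }
```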
Uses Jina for both embedding and reranking; best retrieval quality:

```json
{
  "embedding": {
    "apiKey": "${JINA_API_KEY}",
    "model": "jina-embeddings-v5-text-small",
    "baseURL": "https://api.jina.ai/v1",
    "dimensions": 1024,
    "taskQuery": "retrieval.query",
    "taskPassage": "retrieval.passage",
    "normalized": true
  },
  "dbPath": "~/.openclaw/memory/lancedb-pro",
  "autoCapture": true,
  "autoRecall": true,
  "captureAssistant": false,
  "smartExtraction": true,
  "extractMinMessages": 2,
  "extractMaxChars": 8000,
  "enableManagementTools": false,
  "llm": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "gpt-4o-mini",
    "baseURL": "https://api.openai.com/v1"
  },
  "retrieval": {
    "mode": "hybrid",
    "vectorWeight": 0.7,
    "bm25Weight": 0.3,
    "rerank": "cross-encoder",
    "rerankProvider": "jina",
    "rerankModel": "jina-reranker-v3",
    "rerankEndpoint": "https://api.jina.ai/v1/rerank",
    "rerankApiKey": "${JINA_API_KEY}",
    "candidatePoolSize": 12,
    "minScore": 0.6,
    "hardMinScore": 0.62,
    "filterNoise": true,
    "lengthNormAnchor": 500,
    "timeDecayHalfLifeDays": 60,
    "reinforcementFactor": 0.5,
    "maxHalfLifeMultiplier": 3
  },
  "sessionMemory": { "enabled": false, "messageCount": 15 }
}
```
Why these settings excel:
- Task-aware embeddings (`taskQuery` / `taskPassage`) optimized for retrieval
- `candidatePoolSize: 12` + `minScore: 0.6`: aggressive filtering reduces noise
- `captureAssistant: false`: prevents storing agent-generated boilerplate
- `sessionMemory: false`: avoids polluting retrieval with session summaries

Full configuration example (all options):

```json
{
  "embedding": {
    "apiKey": "${JINA_API_KEY}",
    "model": "jina-embeddings-v5-text-small",
    "baseURL": "https://api.jina.ai/v1",
    "dimensions": 1024,
    "taskQuery": "retrieval.query",
    "taskPassage": "retrieval.passage",
    "normalized": true
  },
  "dbPath": "~/.openclaw/memory/lancedb-pro",
  "autoCapture": true,
  "autoRecall": true,
  "captureAssistant": false,
  "smartExtraction": true,
  "llm": {
    "apiKey": "${OPENAI_API_KEY}",
    "model": "gpt-4o-mini",
    "baseURL": "https://api.openai.com/v1"
  },
  "extractMinMessages": 2,
  "extractMaxChars": 8000,
  "enableManagementTools": false,
  "retrieval": {
    "mode": "hybrid",
    "vectorWeight": 0.7,
    "bm25Weight": 0.3,
    "minScore": 0.3,
    "hardMinScore": 0.35,
    "rerank": "cross-encoder",
    "rerankProvider": "jina",
    "rerankModel": "jina-reranker-v3",
    "rerankEndpoint": "https://api.jina.ai/v1/rerank",
    "rerankApiKey": "${JINA_API_KEY}",
    "candidatePoolSize": 20,
    "recencyHalfLifeDays": 14,
    "recencyWeight": 0.1,
    "filterNoise": true,
    "lengthNormAnchor": 500,
    "timeDecayHalfLifeDays": 60,
    "reinforcementFactor": 0.5,
    "maxHalfLifeMultiplier": 3
  },
  "scopes": {
    "default": "global",
    "definitions": {
      "global": { "description": "Shared knowledge" },
      "agent:discord-bot": { "description": "Discord bot private" }
    },
    "agentAccess": { "discord-bot": ["global", "agent:discord-bot"] }
  },
  "sessionStrategy": "none",
  "memoryReflection": {
    "storeToLanceDB": true,
    "injectMode": "inheritance+derived",
    "agentId": "memory-distiller",
    "messageCount": 120,
    "maxInputChars": 24000,
    "thinkLevel": "medium"
  },
  "selfImprovement": { "enabled": true, "beforeResetNote": true, "ensureLearningFiles": true },
  "mdMirror": { "enabled": false },
  "decay": {
    "recencyHalfLifeDays": 30,
    "recencyWeight": 0.4,
    "frequencyWeight": 0.3,
    "intrinsicWeight": 0.3,
    "betaCore": 0.8,
    "betaWorking": 1.0,
    "betaPeripheral": 1.3
  },
  "tier": {
    "coreAccessThreshold": 10,
    "coreCompositeThreshold": 0.7,
    "coreImportanceThreshold": 0.8,
    "workingAccessThreshold": 3,
    "workingCompositeThreshold": 0.4,
    "peripheralCompositeThreshold": 0.15,
    "peripheralAgeDays": 60
  }
}
```
`embedding` fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `apiKey` | string | — | API key (supports `${ENV_VAR}`); array for multi-key failover |
| `model` | string | — | Model identifier |
| `baseURL` | string | provider default | API endpoint |
| `dimensions` | number | provider default | Vector dimensionality |
| `taskQuery` | string | — | Task hint for query embeddings (`retrieval.query`) |
| `taskPassage` | string | — | Task hint for passage embeddings (`retrieval.passage`) |
| `normalized` | boolean | false | Request L2-normalized embeddings |
| `provider` | string | `openai-compatible` | Provider type selector |
|  | boolean | true | Auto-chunk documents exceeding embedding context limits |
Core fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `dbPath` | string | `~/.openclaw/memory/lancedb-pro` | LanceDB data directory |
| `autoCapture` | boolean | true | Auto-extract memories after agent replies (via hook) |
| `autoRecall` | boolean | false (schema default) | Inject memories before agent processing; set to true explicitly |
| `captureAssistant` | boolean | false | Include assistant messages in extraction |
| `smartExtraction` | boolean | true | LLM-powered 6-category extraction |
| `extractMinMessages` | number | 2 | Min conversation turns before extraction triggers |
| `extractMaxChars` | number | 8000 | Max context chars sent to the extraction LLM |
| `enableManagementTools` | boolean | false | Register CLI management tools as agent tools |
|  | number | 15 | Min prompt chars to trigger auto-recall (6 for CJK) |
|  | number | 0 | Min turns before the same memory can re-inject in the same session |
| `sessionStrategy` | string | `systemSessionMemory` | Session pipeline: `systemSessionMemory` / `memoryReflection` / `none` |
|  | number | 3 | Max memories injected per auto-recall (max 20) |
|  | string |  | Selection algorithm |
|  | array |  | Categories eligible for auto-recall injection |
|  | boolean | true | Exclude reflection-type memories from auto-recall |
|  | number | 30 | Max age (days) of memories considered for auto-recall |
|  | number | 10 | Max entries per scope key in auto-recall results |
`llm` fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `apiKey` | string | falls back to `embedding.apiKey` | LLM API key |
| `model` | string |  | LLM model for extraction |
| `baseURL` | string | falls back to `embedding.baseURL` | LLM endpoint |

`retrieval` fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `mode` | string | `hybrid` | `hybrid` / `vector` (a BM25-only mode does not exist in the schema) |
| `vectorWeight` | number | 0.7 | Weight for vector search |
| `bm25Weight` | number | 0.3 | Weight for BM25 full-text search |
| `minScore` | number | 0.3 | Minimum relevance threshold |
| `hardMinScore` | number | 0.35 | Hard cutoff post-reranking |
| `rerank` | string |  | Reranking strategy (e.g. `cross-encoder`) |
| `rerankProvider` | string | `jina` | `jina` / `siliconflow` / Voyage / Pinecone / vLLM (Docker Model Runner) |
| `rerankModel` | string |  | Reranker model name |
| `rerankEndpoint` | string | provider default | Reranker API URL |
| `rerankApiKey` | string | — | Reranker API key |
| `candidatePoolSize` | number | 20 | Candidates to rerank before the final filter |
| `recencyHalfLifeDays` | number | 14 | Freshness decay half-life |
| `recencyWeight` | number | 0.1 | Weight of recency in scoring |
| `timeDecayHalfLifeDays` | number | 60 | Memory age decay factor |
| `reinforcementFactor` | number | 0.5 | Access-based half-life multiplier (0-2, set 0 to disable) |
| `maxHalfLifeMultiplier` | number | 3 | Hard cap on the reinforcement boost |
| `filterNoise` | boolean | true | Filter refusals, greetings, etc. |
| `lengthNormAnchor` | number | 500 | Reference length for normalization (chars) |
Access reinforcement note: reinforcement is whitelisted to `source: "manual"` only; auto-recall does NOT strengthen memories, preventing noise amplification.
Use `sessionStrategy` (a top-level field) to configure the session pipeline:

| Value | Behavior |
|---|---|
| `systemSessionMemory` (default) | Built-in session memory (simpler) |
| `memoryReflection` | Advanced LLM-powered reflection with inheritance/derived injection |
| `none` | Session summaries disabled |

`memoryReflection` config (used when `sessionStrategy: "memoryReflection"`):
| Field | Type | Default | Description |
|---|---|---|---|
| `storeToLanceDB` | boolean | true | Persist reflections to LanceDB |
|  | boolean | true | Also write the legacy combined row |
| `injectMode` | string |  | Injection mode (e.g. `inheritance+derived`) |
| `agentId` | string | — | Dedicated reflection agent (e.g. `memory-distiller`) |
| `messageCount` | number | 120 | Messages to include in a reflection |
| `maxInputChars` | number | 24000 | Max chars sent to the reflection LLM |
|  | number | 20000 | Reflection LLM timeout (ms) |
| `thinkLevel` | string |  | Reasoning depth (e.g. `medium`) |
|  | number | 3 | Max error entries injected into a reflection |
|  | boolean | true | Deduplicate error signals before injection |
`memoryReflection.recall` sub-object (controls which past reflections are retrieved for injection):

| Field | Type | Default | Description |
|---|---|---|---|
|  | string |  | Recall mode |
|  | number | 6 | Max reflection entries retrieved (max 20) |
|  | array |  | Which kinds to include |
|  | number | 45 | Max age (days) of reflections to retrieve |
|  | number | 10 | Max entries per scope key |
|  | number | 2 | Min times an entry must appear to be included |
|  | number | 0.18 | Minimum relevance score (range 0-5) |
|  | number | 8 | Min prompt length to trigger recall |
⚠️ `sessionMemory` is a legacy compatibility shim since v1.1.0. Prefer `sessionStrategy` instead.

- `sessionMemory.enabled: true` maps to `sessionStrategy: "systemSessionMemory"`
- `sessionMemory.enabled: false` maps to `sessionStrategy: "none"`

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | false | Legacy: enable session summaries on `/new` |
| `messageCount` | number | 15 | Legacy: maps to `memoryReflection.messageCount` |
`selfImprovement` fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | true | Enable the self-improvement tools (`self_improvement_log` etc.); on by default |
| `beforeResetNote` | boolean | true | Inject a learning reminder before session reset |
|  | boolean | true | Skip bootstrap for sub-agents |
| `ensureLearningFiles` | boolean | true | Auto-create LEARNINGS.md / ERRORS.md if missing |

Tool activation rules:
- `self_improvement_log`: requires `selfImprovement.enabled: true` (the default, so it is active unless explicitly disabled)
- `self_improvement_extract_skill` + `self_improvement_review`: additionally require `enableManagementTools: true`

`mdMirror` fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | false | Mirror memory entries as markdown files |
|  | string | — | Directory for the markdown mirror files |
`decay` fields (Weibull lifecycle):

| Field | Type | Default | Description |
|---|---|---|---|
| `recencyHalfLifeDays` | number | 30 | Base Weibull decay half-life |
| `recencyWeight` | number | 0.4 | Weight of recency in the lifecycle score (distinct from `retrieval.recencyWeight`) |
| `frequencyWeight` | number | 0.3 | Weight of access frequency |
| `intrinsicWeight` | number | 0.3 | Weight of importance × confidence |
| `betaCore` | number | 0.8 | Weibull shape for core memories |
| `betaWorking` | number | 1.0 | Weibull shape for working memories |
| `betaPeripheral` | number | 1.3 | Weibull shape for peripheral memories |
|  | number | 0.9 | Minimum lifecycle score for the core tier |
|  | number | 0.7 | Minimum lifecycle score for the working tier |
|  | number | 0.5 | Minimum lifecycle score for the peripheral tier |
|  | number | 0.3 | Score below which a memory is considered stale |
|  | number | 0.3 | Minimum search boost applied to the lifecycle score |
|  | number | 1.5 | Multiplier for importance in the lifecycle score |
`tier` fields (promotion/demotion thresholds):

| Field | Type | Default | Description |
|---|---|---|---|
| `coreAccessThreshold` | number | 10 | Access count for core promotion |
| `coreCompositeThreshold` | number | 0.7 | Lifecycle score for core promotion |
| `coreImportanceThreshold` | number | 0.8 | Minimum importance for core promotion |
| `workingAccessThreshold` | number | 3 | Access count for working promotion |
| `workingCompositeThreshold` | number | 0.4 | Lifecycle score for working promotion |
| `peripheralCompositeThreshold` | number | 0.15 | Score below which demotion occurs |
| `peripheralAgeDays` | number | 60 | Age threshold for stale-memory demotion |
`memory_recall`: search long-term memory via hybrid retrieval

| Parameter | Type | Required | Default | Notes |
|---|---|---|---|---|
| `query` | string | yes | — | Search query |
| `limit` | number | no | 5 | Max 20 |
| `scope` | string | no | — | Specific scope to search |
| `category` | enum | no | — | Filter by memory category |
`memory_store`: save information to long-term memory

| Parameter | Type | Required | Default | Notes |
|---|---|---|---|---|
| `text` | string | yes | — | Information to remember |
| `importance` | number | no | 0.7 | Range 0-1 |
| `category` | enum | no | — | Memory classification |
| `scope` | string | no |  | Target scope |
`memory_forget`: delete memories by search or direct ID

| Parameter | Type | Required | Notes |
|---|---|---|---|
| `query` | string | one of query/id | Search query to locate the memory |
| `id` | string | one of query/id | Full UUID or an 8+ char prefix |
| `scope` | string | no | Scope for search/deletion |
`memory_update`: update a memory (preserves the original timestamp; preference/entity text updates create a new versioned row preserving history)

| Parameter | Type | Required | Notes |
|---|---|---|---|
| `id` | string | yes | Full UUID or an 8+ char prefix |
| `text` | string | no | New content (triggers re-embedding; preference/entity creates a supersede version) |
| `importance` | number | no | New score 0-1 |
| `category` | enum | no | New classification |
Management tools (require `enableManagementTools: true`):

`memory_stats`: usage statistics
- `scope` (string, optional): filter by scope

`memory_list`: list recent memories with filtering
- `limit` (number, optional, default 10, max 50), `scope`, `category`, `offset` (pagination)

Reminder: `self_improvement_log` is enabled by default (`selfImprovement.enabled: true`); `self_improvement_extract_skill` and `self_improvement_review` additionally require `enableManagementTools: true`.
`self_improvement_log`: log learning/error entries into LEARNINGS.md / ERRORS.md

| Parameter | Type | Required | Notes |
|---|---|---|---|
|  | enum | yes | `learning` or `error` |
|  | string | yes | One-line summary |
|  | string | no | Detailed context |
|  | string | no | Action to prevent recurrence |
|  | string | no | Per-type field (Learning: …; Error: …) |
|  | string | no |  |
|  | string | no |  |
`self_improvement_extract_skill`: create a skill scaffold from a learning entry

| Parameter | Type | Required | Default | Notes |
|---|---|---|---|---|
|  | string | yes | — | Source entry id (format `LRN-YYYYMMDD-XXX` or `ERR-YYYYMMDD-XXX`) |
|  | string | yes | — | Lowercase with hyphens |
|  | enum | no |  |  |
|  | string | no |  | Relative output directory |

`self_improvement_review`: summarize the governance backlog (no parameters)
LLM-powered automatic memory classification and storage, triggered after conversations.

```json
{
  "smartExtraction": true,
  "extractMinMessages": 2,
  "extractMaxChars": 8000,
  "llm": { "apiKey": "${OPENAI_API_KEY}", "model": "gpt-4o-mini" }
}
```

Minimal (reuses the embedding API key; no separate `llm` block needed):

```json
{
  "embedding": { "apiKey": "${OPENAI_API_KEY}", "model": "text-embedding-3-small" },
  "smartExtraction": true
}
```

Disable:

```json
{ "smartExtraction": false }
```
| Input Category | Stored As | Dedup Behavior |
|---|---|---|
| Profile |  | Always merge (auto-consolidates) |
| Preferences |  | Conditional merge |
| Entities |  | Conditional merge |
| Events |  | Append-only (no merge) |
| Cases |  | Append-only (no merge) |
| Patterns |  | Conditional merge |
Dedup decisions: CREATE / MERGE / SKIP / SUPPORT / CONTEXTUALIZE / CONTRADICT

| Provider | Model | Base URL | Dimensions | Notes |
|---|---|---|---|---|
| Jina (recommended) | `jina-embeddings-v5-text-small` | `https://api.jina.ai/v1` | 1024 | Latest (Feb 2026), task-aware LoRA, 32K ctx |
| Jina (multimodal) |  | `https://api.jina.ai/v1` | 1024 | Text + image, Qwen2.5-VL backbone |
| OpenAI | `text-embedding-3-large` | `https://api.openai.com/v1` | 3072 | Best OpenAI quality (MTEB 64.6%) |
| OpenAI | `text-embedding-3-small` | `https://api.openai.com/v1` | 1536 | Cost-efficient |
| DashScope (Alibaba) |  |  | 1024 | Recommended for Chinese users; also supports rerank (see the note below) |
| Google Gemini |  |  | 3072 | Latest (Mar 2026), multimodal, 100+ languages |
| Google Gemini |  |  | 3072 | Stable text-only |
| Ollama (local) | `mxbai-embed-large` | `http://localhost:11434/v1` | 1024 | Recommended local; beats text-embedding-3-large |
| Ollama (local) | `snowflake-arctic-embed2` | `http://localhost:11434/v1` | 1024 | Best multilingual local option |
| Ollama (local) | `nomic-embed-text:v1.5` | `http://localhost:11434/v1` | 768 | Lightweight classic, 270MB |
DashScope rerank note: DashScope is not a `rerankProvider` enum value, but its rerank API response is Jina-compatible. Use `rerankProvider: "jina"` with DashScope's endpoint:

```json
"retrieval": {
  "rerank": "cross-encoder",
  "rerankProvider": "jina",
  "rerankModel": "qwen3-rerank",
  "rerankEndpoint": "https://dashscope.aliyuncs.com/compatible-api/v1/reranks",
  "rerankApiKey": "${DASHSCOPE_API_KEY}"
}
```
Multi-key failover: set `apiKey` as an array for round-robin rotation on 429/503 errors.
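For example (a sketch; the env var names are illustrative):

```json
{
  "embedding": {
    "apiKey": ["${JINA_API_KEY_PRIMARY}", "${JINA_API_KEY_BACKUP}"],
    "model": "jina-embeddings-v5-text-small",
    "baseURL": "https://api.jina.ai/v1"
  }
}
```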
| Provider | `rerankProvider` | Endpoint | Model | Notes |
|---|---|---|---|---|
| Jina (default) | `jina` | `https://api.jina.ai/v1/rerank` | `jina-reranker-v3` | Latest text reranker (2025, Qwen3 backbone, 131K ctx) |
| Jina (multimodal) | `jina` | `https://api.jina.ai/v1/rerank` |  | Multimodal (text + images); use when docs contain images |
| SiliconFlow | `siliconflow` | `https://api.siliconflow.com/v1/rerank` | `BAAI/bge-reranker-v2-m3` | Free tier available |
| Voyage AI |  |  |  | Sends …, no … |
| Pinecone |  |  |  | Pinecone customers only |
| vLLM / Docker Model Runner |  | Custom endpoint | any compatible model | Self-hosted via Docker Model Runner |
Jina key can be reused for both embedding and reranking.
| Scope Format | Description |
|---|---|
| Shared across all agents |
| Agent-specific memories |
| Custom-named scopes |
| Project-specific memories |
| User-specific memories |
Default access:
global + agent:<id>. Multi-scope requires explicit scopes.agentAccess โ see Full Config above.
To disable memory entirely (unbind the slot without removing the plugin):

```json
{ "plugins": { "slots": { "memory": "none" } } }
```
| Tier | Decay Floor | Beta | Behavior |
|---|---|---|---|
| Core | 0.9 | 0.8 | Gentle sub-exponential decline |
| Working | 0.7 | 1.0 | Standard exponential (default) |
| Peripheral | 0.5 | 1.3 | Rapid super-exponential fade |
Fusion: `weightedFusion = (vectorScore × 0.7) + (bm25Score × 0.3)`

Pipeline: RRF Fusion → Cross-Encoder Rerank → Lifecycle Decay Boost → Length Norm → Hard Min Score → MMR Diversity (cosine > 0.85 demoted)

Reranking: 60% cross-encoder score + 40% original fused score. Falls back to cosine similarity on API failure.
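For intuition, a worked example with assumed component scores:

```
fused = 0.7 * vector + 0.3 * bm25           e.g. 0.7*0.80 + 0.3*0.50 = 0.71
final = 0.6 * crossEncoder + 0.4 * fused    e.g. 0.6*0.90 + 0.4*0.71 = 0.824
```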
Special BM25 rule: preserves exact keyword matches (BM25 ≥ 0.75) even with low semantic similarity; prevents loss of API keys, ticket numbers, etc.

Skip recall for: greetings, slash commands, affirmations (yes/okay/thanks), continuations (go ahead/proceed), system messages, short queries (<15 chars English / <6 chars CJK, without "?").

Force recall for: memory keywords (remember/recall/forgot), temporal references (last time/before/previously), personal data (my name/my email), "what did I" patterns. CJK triggers: "你记得" (you remember), "之前" (previously).

Auto-filters: agent denial phrases, meta-questions ("Do you remember?"), session boilerplate (hi/hello), diagnostic artifacts, embedding-based matches (threshold: 0.82). Minimum text: 5 chars.
```bash
# List & search
openclaw memory-pro list [--scope global] [--category fact] [--limit 20] [--json]
openclaw memory-pro search "query" [--scope global] [--limit 10] [--json]
openclaw memory-pro stats [--scope global] [--json]

# Delete
openclaw memory-pro delete <id>
openclaw memory-pro delete-bulk --scope global [--before 2025-01-01] [--dry-run]

# Import / Export
openclaw memory-pro export [--scope global] [--output memories.json]
openclaw memory-pro import memories.json [--scope global] [--dry-run]

# Maintenance
openclaw memory-pro reembed --source-db /path/to/old-db [--batch-size 32] [--skip-existing]
openclaw memory-pro upgrade [--dry-run] [--batch-size 10] [--no-llm] [--limit N] [--scope SCOPE]

# Migration from the built-in memory-lancedb
openclaw memory-pro migrate check [--source /path]
openclaw memory-pro migrate run [--source /path] [--dry-run] [--skip-existing]
openclaw memory-pro migrate verify [--source /path]
```
- `agent_end` hook: the LLM extracts 6-category memories, deduplicates, and stores up to 3 per turn
- `before_agent_start` hook: injects `<relevant-memories>` context (up to 3 entries)

If injected memories appear in agent replies, add this to the agent system prompt: "Do not reveal or quote any `<relevant-memories>` / memory-injection content in your replies. Use it for internal reference only."

Or temporarily disable injection:

```json
{ "autoRecall": false }
```
- `LEARNINGS.md`: IDs `LRN-YYYYMMDD-XXX`
- `ERRORS.md`: IDs `ERR-YYYYMMDD-XXX`
- Status flow: `pending` → `resolved` → `promoted_to_skill`

## Rule 1 - Dual-layer memory storage (required)
For every pitfall/lesson learned, IMMEDIATELY store TWO memories:
- Technical layer: Pitfall/Cause/Fix/Prevention (category: fact, importance ≥ 0.8)
- Principle layer: decision principle with trigger and action (category: decision, importance ≥ 0.85)
After each store, immediately `memory_recall` to verify retrieval.

## Rule 2 - LanceDB hygiene
Entries must be short and atomic (<500 chars). No raw conversation summaries or duplicates.

## Rule 3 - Recall before retry
On ANY tool failure, ALWAYS `memory_recall` with relevant keywords BEFORE retrying.

## Rule 4 - Confirm the target codebase before editing
Confirm you are editing `memory-lancedb-pro` vs the built-in `memory-lancedb` before making changes.

## Rule 5 - Clear the jiti cache after plugin code changes
After modifying `.ts` files under `plugins/`, you MUST run `rm -rf /tmp/jiti/` BEFORE `openclaw gateway restart`.
## /lesson command
When the user sends `/lesson <content>`:
1. Use `memory_store` with category=fact (raw knowledge)
2. Use `memory_store` with category=decision (actionable takeaway)
3. Confirm what was saved

## /remember command
When the user sends `/remember <content>`:
1. Use `memory_store` with an appropriate category and importance
2. Confirm with the stored memory ID