Elite Longterm Memory
Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vibe-coding ready.
The ultimate memory system for AI agents. Combines 6 proven approaches into one bulletproof architecture.
Never lose context. Never forget decisions. Never repeat mistakes.
```
                     ELITE LONGTERM MEMORY

  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐
  │   HOT RAM   │   │ WARM STORE  │   │ COLD STORE  │
  │             │   │             │   │             │
  │  SESSION-   │   │  LanceDB    │   │  Git-Notes  │
  │  STATE.md   │   │  Vectors    │   │  Knowledge  │
  │             │   │             │   │  Graph      │
  │  (survives  │   │  (semantic  │   │  (permanent │
  │ compaction) │   │   search)   │   │  decisions) │
  └──────┬──────┘   └──────┬──────┘   └──────┬──────┘
         │                 │                 │
         └─────────────────┼─────────────────┘
                           ▼
                    ┌─────────────┐
                    │  MEMORY.md  │   Curated long-term
                    │  + daily/   │   (human-readable)
                    └──────┬──────┘
                           │
                           ▼
                    ┌─────────────┐
                    │ SuperMemory │   Cloud backup (optional)
                    │     API     │
                    └─────────────┘
```
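The hot/warm/cold split can be sketched as a simple router. This is an illustration only, assuming a hypothetical `route` function and memory-dict shape that are not part of the skill's actual API:

```python
# Hypothetical tier router illustrating the hot/warm/cold architecture.
# The function name and the memory dict's keys are inventions for this sketch.
def route(memory: dict) -> str:
    if memory.get("kind") == "working":
        return "SESSION-STATE.md"   # hot: active context, survives compaction
    if memory.get("permanent"):
        return "git-notes"          # cold: permanent decisions, branch-aware
    return "lancedb"                # warm: everything searchable semantically

print(route({"kind": "working", "content": "Current task: refactor auth"}))
print(route({"permanent": True, "content": "Use React for frontend"}))
print(route({"kind": "fact", "content": "User prefers dark mode"}))
```

The key design idea: each tier trades durability against write cost, so routing by lifetime keeps the hot tier small.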
From: bulletproof-memory
Active working memory that survives compaction. Write-Ahead Log protocol.
```markdown
# SESSION-STATE.md — Active Working Memory

## Current Task
[What we're working on RIGHT NOW]

## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...

## Pending Actions
- [ ] ...
```
Rule: Write BEFORE responding. Triggered by user input, not agent memory.
From: lancedb-memory
Semantic search across all memories. Auto-recall injects relevant context.
```shell
# Auto-recall (happens automatically)
memory_recall query="project status" limit=5

# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9
```
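Under the hood, semantic recall amounts to ranking stored memories by vector similarity to the query. A minimal stand-in, using toy bag-of-words vectors instead of real embeddings (the `embed`, `cosine`, and `recall` names are illustrative, not the plugin's API):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(query: str, memories: list[str], limit: int = 5) -> list[str]:
    """Rank memories by similarity to the query, most relevant first."""
    q = embed(query)
    return sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)[:limit]

memories = [
    "User prefers dark mode",
    "Project status: API integration blocked on auth",
    "Deploy target is Vercel",
]
print(recall("project status", memories, limit=1))
```

LanceDB does the same thing with learned embeddings and an ANN index, which is what makes recall meaningful beyond exact word overlap.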
From: git-notes-memory
Structured decisions, learnings, and context. Branch-aware.
```shell
# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h

# Retrieve context
python3 memory.py -p $DIR get "frontend"
```
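The mechanism underneath is git's notes feature: JSON payloads attached to commits under a dedicated notes ref. A self-contained sketch of that mechanism (using real `git notes` plumbing in a throwaway repo, not the skill's memory.py; the `refs/notes/memory` ref name is an assumption):

```python
import json
import subprocess
import tempfile

def git(repo: str, *args: str) -> str:
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(
        ["git", "-C", repo,
         "-c", "user.email=demo@example.com", "-c", "user.name=demo", *args],
        check=True, capture_output=True, text=True,
    ).stdout

# Throwaway repo for the demo
repo = tempfile.mkdtemp()
git(repo, "init", "-q")
git(repo, "commit", "--allow-empty", "-q", "-m", "init")

# Attach a structured memory to HEAD under refs/notes/memory
payload = {"type": "decision", "content": "Use React for frontend"}
git(repo, "notes", "--ref=memory", "add", "-m", json.dumps(payload))

# Read it back
stored = json.loads(git(repo, "notes", "--ref=memory", "show"))
print(stored["content"])
```

Because notes live on a ref, they version and sync with the repo itself, which is what makes the cold store "permanent" and branch-aware.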
From: OpenClaw native
Human-readable long-term memory. Daily logs + distilled wisdom.
```
workspace/
├── MEMORY.md          # Curated long-term (the good stuff)
└── memory/
    ├── 2026-01-30.md  # Daily log
    ├── 2026-01-29.md
    └── topics/        # Topic-specific files
```
From: supermemory
Cross-device sync. Chat with your knowledge base.
```shell
export SUPERMEMORY_API_KEY="your-key"
supermemory add "Important context"
supermemory search "what did we decide about..."
```
NEW: Automatic fact extraction
Mem0 automatically extracts facts from conversations. 80% token reduction.
```shell
npm install mem0ai
export MEM0_API_KEY="your-key"
```
```javascript
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

// Conversations auto-extract facts
await client.add(messages, { user_id: "user123" });

// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });
```
```shell
cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md — Active Working Memory
This file is the agent's "RAM" — survives compaction, restarts, distractions.

## Current Task
[None]

## Key Context
[None yet]

## Pending Actions
- [ ] None

## Recent Decisions
[None yet]

---
*Last updated: [timestamp]*
EOF
```
In ~/.openclaw/openclaw.json:
```json
{
  "memorySearch": {
    "enabled": true,
    "provider": "openai",
    "sources": ["memory"],
    "minScore": 0.3,
    "maxResults": 10
  },
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "enabled": true,
        "config": {
          "autoCapture": false,
          "autoRecall": true,
          "captureCategories": ["preference", "decision", "fact"],
          "minImportance": 0.7
        }
      }
    }
  }
}
```
```shell
cd ~/clawd
git init  # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start
```
```shell
# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory
```
```shell
export SUPERMEMORY_API_KEY="your-key"  # Add to ~/.zshrc for persistence
```
Write-Ahead Log: Write state BEFORE responding, not after.
| Trigger | Action |
|---|---|
| User states preference | Write to SESSION-STATE.md → then respond |
| User makes decision | Write to SESSION-STATE.md → then respond |
| User gives deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |
Why? If you respond first and crash/compact before saving, context is lost. WAL ensures durability.
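The write-ahead step can be sketched in a few lines of Python. This is an illustration of the protocol, not the skill's implementation; `wal_write` and `handle` are invented names, and the file path comes from the SESSION-STATE.md template above:

```python
import os
import tempfile

def wal_write(fact: str, path: str) -> None:
    """Append the fact and force it to disk BEFORE the agent responds."""
    with open(path, "a") as f:
        f.write(f"- {fact}\n")
        f.flush()
        os.fsync(f.fileno())  # durability: survives a crash/compaction mid-response

def handle(user_input: str, path: str) -> str:
    wal_write(f"Decision: {user_input}", path)  # 1. write-ahead
    return f"Got it: {user_input}"              # 2. only then respond

# Demo against a throwaway state file
state = os.path.join(tempfile.mkdtemp(), "SESSION-STATE.md")
print(handle("Use Tailwind, not vanilla CSS", state))
```

The `fsync` is the whole point: an append that only reaches the OS page cache can still be lost, so the write is not "ahead" until it is on disk.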
```
User: "Let's use Tailwind for this project, not vanilla CSS"

Agent (internal):
  1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
  2. Store in Git-Notes: decision about CSS framework
  3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
  4. THEN respond: "Got it — Tailwind it is..."
```
```shell
# Audit vector memory
memory_recall query="*" limit=50

# Clear all vectors (nuclear option)
rm -rf ~/.openclaw/memory/lancedb/
openclaw gateway restart

# Export Git-Notes
python3 memory.py -p . export --format json > memories.json

# Check memory health
du -sh ~/.openclaw/memory/
wc -l MEMORY.md
ls -la memory/
```
Understanding the root causes helps you fix them:
| Failure Mode | Cause | Fix |
|---|---|---|
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Use Mem0 or manual logging |
| Sub-agents isolated | Don't inherit context | Pass context in task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
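The sub-agent fix in the table can be sketched concretely: since spawned agents don't inherit memory, inject the session state into the task prompt yourself. A minimal sketch; `build_subagent_prompt` and the prompt layout are assumptions, not the skill's API:

```python
import tempfile
from pathlib import Path

def build_subagent_prompt(task: str, state_file: str = "SESSION-STATE.md") -> str:
    """Prepend current session state so the spawned sub-agent inherits context."""
    state = Path(state_file)
    context = state.read_text() if state.exists() else "(no session state)"
    return f"## Inherited context\n{context}\n\n## Your task\n{task}"

# Demo with a throwaway state file
demo_state = Path(tempfile.mkdtemp()) / "SESSION-STATE.md"
demo_state.write_text("Decision: Use Tailwind, not vanilla CSS")
prompt = build_subagent_prompt("Build the navbar", str(demo_state))
print(prompt)
```

Keeping the injected state small (the <5KB MEMORY.md discipline below helps) prevents the spawn prompt from crowding out the task itself.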
If you have an OpenAI key, enable semantic search:
openclaw configure --section web
This enables vector search over MEMORY.md + memory/*.md files.
Auto-extract facts from conversations. 80% token reduction.
npm install mem0ai
```javascript
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

// Auto-extract and store
await client.add([
  { role: "user", content: "I prefer Tailwind over vanilla CSS" }
], { user_id: "ty" });

// Retrieve relevant memories
const memories = await client.search("CSS preferences", { user_id: "ty" });
```
```
memory/
├── projects/
│   ├── strykr.md
│   └── taska.md
├── people/
│   └── contacts.md
├── decisions/
│   └── 2026-01.md
├── lessons/
│   └── mistakes.md
└── preferences.md
```
Keep MEMORY.md as a summary (<5KB), link to detailed files.
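That size budget is easy to enforce mechanically. A hypothetical hygiene check (the function name and 5KB limit follow the guideline above; neither is part of the skill):

```python
import tempfile
from pathlib import Path

def memory_md_ok(path: str, limit: int = 5 * 1024) -> bool:
    """True if MEMORY.md is absent or still within the summary size budget."""
    p = Path(path)
    return (not p.exists()) or p.stat().st_size <= limit

# Demo with a throwaway file
demo = Path(tempfile.mkdtemp()) / "MEMORY.md"
demo.write_text("# MEMORY.md\nShort summary; details linked under memory/topics/\n")
print(memory_md_ok(str(demo)))
```

Run something like this before committing memory updates: when it fails, move detail out to topic files and keep only the summary line plus a link.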
| Problem | Fix |
|---|---|
| Forgets preferences | Add ## Preferences section to MEMORY.md |
| Repeats mistakes | Log every mistake to memory/lessons.md |
| Sub-agents lack context | Include key context in spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check OPENAI_API_KEY is set |
Agent keeps forgetting mid-conversation:
→ SESSION-STATE.md not being updated. Check the WAL protocol.

Irrelevant memories injected:
→ Disable autoCapture and raise the minImportance threshold.

Memory too large, slow recall:
→ Run hygiene: clear old vectors, archive daily logs.

Git-Notes not persisting:
→ Run git push origin "refs/notes/*" to sync notes with the remote.

memory_search returns nothing:
→ Check the OpenAI API key: echo $OPENAI_API_KEY
→ Verify memorySearch is enabled in openclaw.json
Built by @NextXFrontier ā Part of the Next Frontier AI toolkit
MIT-0 (Free to use, modify, and redistribute. No attribution required.)