Context Optimizer
Advanced context management optimized for DeepSeek's 64k context window. Provides intelligent pruning, compression, and token optimization to prevent context overflow while preserving important information.
```javascript
import { createContextPruner } from './lib/index.js';

const pruner = createContextPruner({
  contextLimit: 64000,        // DeepSeek's limit
  autoCompact: true,          // Enable automatic compaction
  dynamicContext: true,       // Enable dynamic relevance-based context
  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  queryAwareCompaction: true, // Compact based on current query relevance
});

await pruner.initialize();

// Process messages with auto-compaction and dynamic context
const processed = await pruner.processMessages(messages, currentQuery);

// Get context health status
const status = pruner.getStatus();
console.log(`Context health: ${status.health}, Relevance scores: ${status.relevanceScores}`);

// Manual compaction when needed
const compacted = await pruner.autoCompact(messages, currentQuery);

// When something isn't in the current context, search the archive
const archiveResult = await pruner.retrieveFromArchive('query about previous conversation', {
  maxContextTokens: 1000,
  minRelevance: 0.4,
});

if (archiveResult.found) {
  // Add relevant snippets to the current context
  const archiveContext = archiveResult.snippets.join('\n\n');
  // Use archiveContext in your prompt
  console.log(`Retrieved ${archiveResult.totalTokens} tokens from archive`);
  console.log(`Found ${archiveResult.sources.length} relevant sources`);
}
```
The context archive provides a RAM-vs-storage approach: the active context window acts as fast working memory, while older content is moved to a searchable archive and pulled back in only when it is relevant to the current query.
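The two-tier split can be sketched as follows. This is an illustrative sketch of the RAM-vs-storage idea, not the skill's actual internals: the function name, message shape, and token counter are all assumptions.

```javascript
// Hypothetical sketch: keep the most recent messages in the active window
// ("RAM") up to a token budget, and spill everything older to the archive
// ("storage") for later retrieval.
function splitContext(messages, maxTokens, countTokens) {
  const active = [];
  let used = 0;
  // Walk newest-first so the most recent messages stay in the active tier.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i]);
    if (used + cost > maxTokens) {
      // Everything at this point and older goes to the archive tier.
      return { active, archived: messages.slice(0, i + 1) };
    }
    active.unshift(messages[i]);
    used += cost;
  }
  return { active, archived: [] };
}

// Example: with a 20-token budget, only the two newest messages stay active.
const { active, archived } = splitContext(
  [{ text: 'a', tokens: 10 }, { text: 'b', tokens: 10 }, { text: 'c', tokens: 10 }],
  20,
  (m) => m.tokens
);
```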
```javascript
{
  contextLimit: 64000,               // DeepSeek's context window
  autoCompact: true,                 // Enable automatic compaction
  compactThreshold: 0.75,            // Start compacting at 75% usage
  aggressiveCompactThreshold: 0.9,   // Aggressive compaction at 90%

  dynamicContext: true,              // Enable dynamic context management
  relevanceDecay: 0.95,              // Relevance decays 5% per time step
  minRelevanceScore: 0.3,            // Minimum relevance to keep
  queryAwareCompaction: true,        // Compact based on current query relevance

  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  preserveRecent: 10,                // Always keep last N messages
  preserveSystem: true,              // Always keep system messages
  minSimilarity: 0.85,               // Semantic similarity threshold

  // Archive settings
  enableArchive: true,               // Enable hierarchical memory system
  archivePath: './context-archive',
  archiveSearchLimit: 10,
  archiveMaxSize: 100 * 1024 * 1024, // 100 MB
  archiveIndexing: true,

  // Chat logging
  logToChat: true,                   // Log optimization events to chat
  chatLogLevel: 'brief',             // 'brief', 'detailed', or 'none'
  chatLogFormat: '📊 {action}: {details}', // Format for chat messages

  // Performance
  batchSize: 5,                      // Messages to process per batch
  maxCompactionRatio: 0.5,           // Maximum 50% compaction in one pass
}
```
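To make the decay parameters concrete, here is a small sketch of how `relevanceDecay` (0.95) and `minRelevanceScore` (0.3) could interact. The mechanism shown (exponential decay per time step, with a hard floor) is an assumption about the skill's behavior; the function names are hypothetical.

```javascript
// Assumed mechanism: each time step multiplies a message's relevance score
// by the decay factor; messages that fall below the floor become candidates
// for compaction or archiving.
function decayedScore(initialScore, age, decay = 0.95) {
  return initialScore * Math.pow(decay, age);
}

function keepMessage(initialScore, age, { decay = 0.95, minScore = 0.3 } = {}) {
  return decayedScore(initialScore, age, decay) >= minScore;
}
```

With these defaults, a message that starts at a score of 1.0 stays above the 0.3 floor for 23 time steps (0.95^23 ≈ 0.31) and drops below it at step 24 (0.95^24 ≈ 0.29), so even initially important messages eventually age out unless refreshed.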
The context optimizer can log events directly to chat:
```javascript
// Example chat log messages:
// 📊 Context optimized: Compacted 15 messages → 8 (47% reduction)
// 📊 Archive search: Found 3 relevant snippets (42% similarity)
// 📊 Dynamic context: Filtered 12 low-relevance messages

// Configure logging:
const pruner = createContextPruner({
  logToChat: true,
  chatLogLevel: 'brief', // Options: 'brief', 'detailed', 'none'
  chatLogFormat: '📊 {action}: {details}',

  // Custom log handler (optional)
  onLog: (level, message, data) => {
    if (level === 'info' && data.action === 'compaction') {
      // Send to chat
      console.log(`🧠 Context optimized: ${message}`);
    }
  },
});
```
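A template like `chatLogFormat` can be expanded with simple placeholder substitution. This is a minimal sketch, assuming `{name}`-style placeholders are replaced by matching fields; the actual skill may format differently, and `formatChatLog` is a hypothetical helper.

```javascript
// Replace each {key} placeholder with the matching field value;
// unknown placeholders are left intact.
function formatChatLog(template, fields) {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in fields ? String(fields[key]) : match
  );
}

// Example usage with the default chatLogFormat:
const line = formatChatLog('📊 {action}: {details}', {
  action: 'Archive search',
  details: 'Found 3 relevant snippets',
});
```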
Add to your Clawdbot config:
```yaml
skills:
  context-pruner:
    enabled: true
    config:
      contextLimit: 64000
      autoPrune: true
```
The pruner will automatically monitor context usage and apply appropriate pruning strategies to stay within DeepSeek's 64k limit.
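The monitoring loop implied by `compactThreshold` (0.75) and `aggressiveCompactThreshold` (0.9) can be sketched as a simple usage check. This is an assumption about how the thresholds select a strategy, with a hypothetical helper name, not the skill's actual logic.

```javascript
// Pick a compaction mode from current token usage relative to the limit.
function compactionMode(usedTokens, contextLimit = 64000,
                        { compactAt = 0.75, aggressiveAt = 0.9 } = {}) {
  const usage = usedTokens / contextLimit;
  if (usage >= aggressiveAt) return 'aggressive'; // e.g. summarize + extract
  if (usage >= compactAt) return 'normal';        // e.g. merge similar messages
  return 'none';
}

// Example: 50,000 of 64,000 tokens (~78% usage) triggers normal compaction.
const mode = compactionMode(50000);
```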