# Reflect
Self-improvement through conversation analysis. Extracts learnings from corrections and success patterns, then proposes updates to agent files or creates new skills.
| Command | Action |
|---|---|
| `/reflect` | Analyze conversation for learnings |
| `/reflect on` | Enable auto-reflection |
| `/reflect off` | Disable auto-reflection |
| `/reflect status` | Show state and metrics |
| `/reflect review` | Review low-confidence learnings |
| `/reflect [agent-name]` | Focus on a specific agent |
"Correct once, never again."
When users correct behavior, those corrections become permanent improvements encoded into the agent system, persisting across all future sessions.
### Step 1: Initialize State

Check and initialize state files using the state manager:

```bash
# Check for existing state
python scripts/state_manager.py init
```

The state directory is configurable via the `REFLECT_STATE_DIR` env var. Default: `~/.reflect/` (portable) or `~/.claude/session/` (Claude Code).
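The lookup order above can be sketched in Python. This is only an illustration of the documented precedence, not the actual `state_manager.py` implementation; the function name is hypothetical:

```python
import os
from pathlib import Path

def resolve_state_dir() -> Path:
    """Pick the reflect state directory using the documented precedence."""
    # 1. An explicit REFLECT_STATE_DIR override always wins
    override = os.environ.get("REFLECT_STATE_DIR")
    if override:
        return Path(override).expanduser()
    # 2. Claude Code session directory, if it exists
    claude_dir = Path.home() / ".claude" / "session"
    if claude_dir.is_dir():
        return claude_dir
    # 3. Portable default
    return Path.home() / ".reflect"
```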
State includes:
- `reflect-state.yaml` - Toggle state, pending reviews
- `reflect-metrics.yaml` - Aggregate metrics
- `learnings.yaml` - Log of all applied learnings

### Step 2: Detect Signals

Use the signal detector to identify learnings:
```bash
python scripts/signal_detector.py --input conversation.txt
```
| Confidence | Triggers | Examples |
|---|---|---|
| HIGH | Explicit corrections | "never", "always", "wrong", "stop", "the rule is" |
| MEDIUM | Approved approaches | "perfect", "exactly", accepted output |
| LOW | Observations | Patterns that worked, not validated |
See signal_patterns.md for full detection rules.
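As an illustration, the trigger words in the table map naturally onto tiered pattern matching. This is a hedged sketch, not the actual `signal_detector.py` logic; the authoritative rules are in `signal_patterns.md`:

```python
import re

# Tiers checked in priority order; trigger words taken from the table above.
TIERS = [
    ("HIGH", re.compile(r"\b(never|always|wrong|stop|the rule is)\b", re.IGNORECASE)),
    ("MEDIUM", re.compile(r"\b(perfect|exactly)\b", re.IGNORECASE)),
]

def classify_signal(message: str) -> str:
    """Return the confidence tier for a single user message."""
    for tier, pattern in TIERS:
        if pattern.search(message):
            return tier
    return "LOW"  # unvalidated observation
```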
### Step 3: Map Signals to Targets

Map each signal to the appropriate target:
Learning Categories:
| Category | Target Files |
|---|---|
| Code Style | , , |
| Architecture | , , |
| Process | , orchestrator agents |
| Domain | Domain-specific agents, |
| Tools | , relevant specialists |
| New Skill | |
See agent_mappings.md for mapping rules.
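A toy keyword-to-category router shows the shape of this step. The keyword lists here are assumptions for illustration only; the real mapping rules live in `agent_mappings.md`:

```python
# Hypothetical keyword hints per category; authoritative rules are in
# references/agent_mappings.md.
CATEGORY_HINTS = {
    "Code Style": ("naming", "format", "lint", "import order"),
    "Architecture": ("layer", "module", "boundary", "dependency"),
    "Process": ("workflow", "review", "commit", "deploy"),
    "Tools": ("cli", "flag", "command", "script"),
}

def categorize(learning: str) -> str:
    """Route a learning to a category by keyword hint, defaulting to Domain."""
    text = learning.lower()
    for category, hints in CATEGORY_HINTS.items():
        if any(hint in text for hint in hints):
            return category
    return "Domain"  # fall back to domain-specific agents
```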
### Step 4: Identify Skill-Worthy Learnings

Some learnings should become new skills rather than agent updates:
Skill-Worthy Criteria:
Quality Gates (must pass all):
- Reusable
- Non-trivial
- Specific (clear trigger conditions)
- Verified
- No duplication with existing skills
See skill_template.md for skill creation guidelines.
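The all-gates-must-pass check is simple to express. A minimal sketch using the five gates from the reflection output format; the snake_case gate names are illustrative:

```python
# The five gates from the Quality Gate Check, as illustrative keys.
REQUIRED_GATES = ("reusable", "non_trivial", "specific", "verified", "no_duplication")

def passes_quality_gates(gates: dict) -> bool:
    """True only when every required gate is present and explicitly passing."""
    return all(gates.get(gate) is True for gate in REQUIRED_GATES)
```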
### Step 5: Generate Output

Produce output in this format:
# Reflection Analysis

## Session Context
- Date: [timestamp]
- Messages Analyzed: [count]
- Focus: [all agents OR specific agent name]

## Signals Detected

| # | Signal | Confidence | Source Quote | Category |
|---|---|---|---|---|
| 1 | [learning] | HIGH | "[exact words]" | Code Style |
| 2 | [learning] | MEDIUM | "[context]" | Architecture |

## Proposed Agent Updates
### Change 1: Update [agent-name]

- **Target**: `[file path]`
- **Section**: [section name]
- **Confidence**: [HIGH/MEDIUM/LOW]
- **Rationale**: [why this change]

```diff
--- a/path/to/agent.md
+++ b/path/to/agent.md
@@ -82,6 +82,7 @@
 ## Section
 * Existing rule
+* New rule from learning
```

## Proposed New Skills

### Skill 1: [skill-name]

**Quality Gate Check**:
- [x] Reusable: [why]
- [x] Non-trivial: [why]
- [x] Specific: [trigger conditions]
- [x] Verified: [how verified]
- [x] No duplication: [checked against]

**Will create**: `.claude/skills/[skill-name]/SKILL.md`

## Conflict Check

- [x] No conflicts with existing rules detected
- OR: Warning - potential conflict with [file:line]

## Commit Message

```
reflect: add learnings from session [date]

Agent updates:
- [learning 1 summary]

New skills:
- [skill-name]: [brief description]

Extracted: [N] signals ([H] high, [M] medium, [L] low confidence)
```

## Review Prompt

Apply these changes?
- `Y` - Apply all changes and commit
- `N` - Discard all changes
- `modify` - Adjust specific changes
- `1,3` - Apply only changes 1 and 3
- `s1` - Apply only skill 1
- `all-skills` - Apply all skills, skip agent updates

### Step 6: Handle User Response

**On `Y` (approve):**
1. Apply each change using the Edit tool
2. Run `git add` on modified files
3. Commit with the generated message
4. Update the learnings log
5. Update metrics

**On `N` (reject):**
1. Discard proposed changes
2. Log the rejection for analysis
3. Ask if the user wants to modify any signals

**On `modify`:**
1. Present each change individually
2. Allow editing the proposed addition
3. Reconfirm before applying

**On selective (e.g., `1,3`):**
1. Apply only the specified changes
2. Log partial acceptance
3. Commit only the applied changes

### Step 7: Update Metrics

```bash
python scripts/metrics_updater.py --accepted 3 --rejected 1 --confidence high:2,medium:1
```

## Toggle Commands

### Enable Auto-Reflection

```bash
/reflect on
# Sets auto_reflect: true in state file
# Will trigger on PreCompact hook
```

### Disable Auto-Reflection

```bash
/reflect off
# Sets auto_reflect: false in state file
```

### Check Status

```bash
/reflect status
# Shows current state and metrics
```

### Review Pending

```bash
/reflect review
# Shows low-confidence learnings awaiting validation
```

## Output Locations

**Project-level (versioned with the repo):**
- `.claude/reflections/YYYY-MM-DD_HH-MM-SS.md` - Full reflection
- `.claude/reflections/index.md` - Project summary
- `.claude/skills/{name}/SKILL.md` - New skills

**Global (user-level):**
- `~/.claude/reflections/by-project/{project}/` - Cross-project
- `~/.claude/reflections/by-agent/{agent}/learnings.md` - Per-agent
- `~/.claude/reflections/index.md` - Global summary

## Memory Integration

Some learnings belong in **auto-memory** (`~/.claude/projects/*/memory/MEMORY.md`) rather than agent files:

| Learning Type | Best Target |
|---|---|
| Behavioral correction ("always do X") | Agent file |
| Project-specific pattern | MEMORY.md |
| Recurring bug/workaround | New skill OR MEMORY.md |
| Tool preference | CLAUDE.md |
| Domain knowledge | MEMORY.md or compound-docs |

When a signal is LOW confidence and project-specific, prefer writing to MEMORY.md over modifying agents.

## Safety Guardrails

### Human-in-the-Loop
- NEVER apply changes without explicit user approval
- Always show the full diff before applying
- Allow selective application

### Git Versioning
- All changes committed with descriptive messages
- Easy rollback via `git revert`
- Learning history preserved

### Incremental Updates
- ONLY add to existing sections
- NEVER delete or rewrite existing rules
- Preserve the original structure

### Conflict Detection
- Check whether a proposed rule contradicts an existing one
- Warn the user if a conflict is detected
- Suggest a resolution strategy

## Integration

### With /handover

If auto-reflection is enabled, the PreCompact hook triggers reflection before handover.

### With Session Health

At 70%+ context usage (Yellow status), reminders to run `/reflect` are injected.

### Hook Integration (Claude Code)

The skill includes hook scripts for automatic integration:

```bash
# Install the hook into your Claude hooks directory
cp hooks/precompact_reflect.py ~/.claude/hooks/
```

Configure in `~/.claude/settings.json`:

```json
{
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "uv run ~/.claude/hooks/precompact_reflect.py --auto"
          }
        ]
      }
    ]
  }
}
```

See [hooks/README.md](hooks/README.md) for full configuration options.

## Portability

This skill works with any LLM tool that supports:
- File read/write operations
- Text pattern matching
- Git operations (optional, for commits)

### Configurable State Location

```bash
# Set a custom state directory
export REFLECT_STATE_DIR=/path/to/state

# Or use the defaults:
# ~/.reflect/ (portable default)
# ~/.claude/session/ (Claude Code default)
```

### No Task Tool Dependency

Unlike the previous agent-based approach, this skill executes directly without spawning subagents. The LLM reads SKILL.md and follows the workflow.

### Git Operations Optional

Commits are wrapped with availability checks: if not in a git repo, changes are still saved but not committed.

## Troubleshooting

**No signals detected:**
- The session may not have had corrections
- Try `/reflect review` to check pending items

**Conflict warning:**
- Review the existing rule cited
- Decide whether the new rule should override it
- You can modify before applying

**Agent file not found:**
- Check the agent name spelling
- Use `/reflect status` to see available targets
- You may need to create the agent file first

## File Structure

```
reflect/
├── SKILL.md                   # This file
├── scripts/
│   ├── state_manager.py       # State file CRUD
│   ├── signal_detector.py     # Pattern matching
│   ├── metrics_updater.py     # Metrics aggregation
│   └── output_generator.py    # Reflection file & index generation
├── hooks/
│   ├── precompact_reflect.py  # PreCompact hook integration
│   ├── settings-snippet.json  # settings.json examples
│   └── README.md              # Hook configuration guide
├── references/
│   ├── signal_patterns.md     # Detection rules
│   ├── agent_mappings.md      # Target mappings
│   └── skill_template.md      # Skill generation
└── assets/
    ├── reflection_template.md # Output template
    └── learnings_schema.yaml  # Schema definition
```
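The `--confidence high:2,medium:1` flag passed to `scripts/metrics_updater.py` in Step 7 bundles per-level counts into one argument. A sketch of parsing it, under the assumption that the value is always comma-joined `level:count` pairs:

```python
def parse_confidence(arg: str) -> dict:
    """Parse 'high:2,medium:1' into {'high': 2, 'medium': 1}."""
    counts = {}
    for pair in arg.split(","):
        level, _, count = pair.partition(":")
        counts[level.strip()] = int(count)
    return counts
```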