# Self Improving Agent 1.0.1
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), or any of the other triggers listed in the table below.
Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
| Situation | Action |
|---|---|
| Command/operation fails | Log to `ERRORS.md` |
| User corrects you | Log to `LEARNINGS.md` with `correction` category |
| User wants missing feature | Log to `FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `ERRORS.md` with integration details |
| Knowledge was outdated | Log to `LEARNINGS.md` with `knowledge_gap` category |
| Found better approach | Log to `LEARNINGS.md` with `best_practice` category |
| Similar to existing entry | Link with `See Also`, consider priority bump |
| Broadly applicable learning | Promote to `CLAUDE.md`, `AGENTS.md`, and/or `.github/copilot-instructions.md` |
Create the `.learnings/` directory in the project root if it doesn't exist:

```bash
mkdir -p .learnings
```

Copy templates from `assets/`, or create the files with headers.
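If you skip the shipped templates, a minimal setup sketch (the header text here is an assumption; the `assets/` templates are canonical):

```bash
# Create the three log files with placeholder headers if missing.
mkdir -p .learnings
for name in LEARNINGS ERRORS FEATURE_REQUESTS; do
  file=".learnings/${name}.md"
  [ -f "$file" ] || printf '# %s\n\n' "$name" > "$file"
done
```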
Append to `.learnings/LEARNINGS.md`:

```markdown
## [LRN-YYYYMMDD-XXX] category
**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)
```
Append to `.learnings/ERRORS.md`:

```markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name
**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
Actual error message or output

### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant

### Suggested Fix
If identifiable, what might resolve this

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)
```
Append to `.learnings/FEATURE_REQUESTS.md`:

```markdown
## [FEAT-YYYYMMDD-XXX] capability_name
**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name
```
ID format: `TYPE-YYYYMMDD-XXX`

- `TYPE`: `LRN` (learning), `ERR` (error), or `FEAT` (feature)
- `XXX`: three-character suffix (e.g., `001`, `A7B`)

Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
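A hedged sketch for picking the next ID, assuming zero-padded sequence suffixes (hash-style suffixes like `A7B` would need different handling):

```bash
# next_id LRN  ->  e.g. LRN-20250115-001 if nothing is logged yet today
next_id() {
  local type="$1" today count
  today=$(date +%Y%m%d)
  # Count entry headers for this type and date across all log files.
  count=$(cat .learnings/*.md 2>/dev/null |
          grep -c "^## \[${type}-${today}-")
  printf '%s-%s-%03d\n' "$type" "$today" "$((count + 1))"
}
```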
When an issue is fixed, update the entry:

```markdown
**Status**: pending → **Status**: resolved

### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
```
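To flip a single entry's status from the shell, a sketch assuming the `**Status**` line format above (`LRN-20250115-001` is a placeholder ID):

```bash
# Mark one entry resolved without touching other entries' Status lines.
awk -v id="LRN-20250115-001" '
  /^## \[/ { in_entry = ($0 ~ ("\\[" id "\\]")) }
  in_entry { sub(/\*\*Status\*\*: pending/, "**Status**: resolved") }
  { print }
' .learnings/LEARNINGS.md > /tmp/learnings.$$ &&
  mv /tmp/learnings.$$ .learnings/LEARNINGS.md
```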
Other status values:

- `in_progress` - Actively being worked on
- `wont_fix` - Decided not to address (add reason in Resolution notes)
- `promoted` - Elevated to `CLAUDE.md`, `AGENTS.md`, or `.github/copilot-instructions.md`

When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
| Target | What Belongs There |
|---|---|
| `CLAUDE.md` | Project facts, conventions, gotchas for all Claude interactions |
| `AGENTS.md` | Agent-specific workflows, tool usage patterns, automation rules |
| `.github/copilot-instructions.md` | Project context and conventions for GitHub Copilot |
Update the original entry:

```markdown
**Status**: pending → **Status**: promoted
**Promoted**: CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
```

Learning (verbose):

> Project uses pnpm workspaces. Attempted `npm install` but failed. Lock file is `pnpm-lock.yaml`. Must use `pnpm install`.
In `CLAUDE.md` (concise):

```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```
Learning (verbose):

> When modifying API endpoints, must regenerate TypeScript client. Forgetting this causes type mismatches at runtime.

In `AGENTS.md` (actionable):

```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```
If logging something similar to an existing entry, search first:

```bash
grep -r "keyword" .learnings/
```

Then link the entries: add **See Also**: ERR-20250110-001 in Metadata.

Review `.learnings/` at natural breakpoints:
```bash
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l

# List pending high-priority items
grep -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["

# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```
Automatically log when you notice:

- **Corrections** (→ learning with `correction` category)
- **Feature requests** (→ feature request entry)
- **Knowledge gaps** (→ learning with `knowledge_gap` category)
- **Errors** (→ error entry)
| Priority | When to Use |
|---|---|
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |
Use **Area** to filter learnings by codebase region:

| Area | Scope |
|---|---|
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |
Keep learnings local (per-developer) - add to `.gitignore`:

```
.learnings/
```

Track learnings in repo (team-wide): don't add `.learnings/` to `.gitignore` - learnings become shared knowledge.

Hybrid (track templates, ignore entries):

```
.learnings/*.md
!.learnings/.gitkeep
```
Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.

Create `.claude/settings.json` in your project:
{ "hooks": { "UserPromptSubmit": [{ "matcher": "", "hooks": [{ "type": "command", "command": "./skills/self-improvement/scripts/activator.sh" }] }] } }
This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
{ "hooks": { "UserPromptSubmit": [{ "matcher": "", "hooks": [{ "type": "command", "command": "./skills/self-improvement/scripts/activator.sh" }] }], "PostToolUse": [{ "matcher": "Bash", "hooks": [{ "type": "command", "command": "./skills/self-improvement/scripts/error-detector.sh" }] }] } }
| Script | Hook Type | Purpose |
|---|---|---|
| `activator.sh` | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| `error-detector.sh` | PostToolUse (Bash) | Triggers on command errors |
See `references/hooks-setup.md` for detailed configuration and troubleshooting.
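The shipped scripts do the real work; purely to illustrate the mechanism, a hypothetical activator might look like this (for `UserPromptSubmit` hooks, Claude Code adds the script's stdout to the model's context):

```bash
#!/usr/bin/env bash
# Hypothetical activator sketch -- the real scripts/activator.sh
# ships with the skill. Stdout becomes extra context for the model.
pending=$(grep -h "Status\*\*: pending" .learnings/*.md 2>/dev/null | wc -l)
echo "Reminder: evaluate this exchange for loggable learnings" \
     "(self-improvement skill). Pending entries: ${pending}."
```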
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
A learning qualifies for skill extraction when ANY of these apply:
| Criterion | Description |
|---|---|
| Recurring | Has `See Also` links to 2+ similar issues |
| Verified | Status is `resolved` with a working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |
```bash
# Preview what would be generated
./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run

# Create the skill
./skills/self-improvement/scripts/extract-skill.sh skill-name
```
After extraction, update the learning's **Status** to `promoted_to_skill` and add a **Skill-Path** field to its Metadata.

If you prefer manual creation:

1. Create `skills/<skill-name>/SKILL.md`
2. Copy the structure from `assets/SKILL-TEMPLATE.md`
3. Fill in the `name` and `description` frontmatter
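A sketch of those manual steps (paths assume this skill's layout):

```bash
# <skill-name> is a placeholder; adjust to your skill's name.
mkdir -p skills/my-extracted-skill
cp skills/self-improvement/assets/SKILL-TEMPLATE.md \
   skills/my-extracted-skill/SKILL.md
# Then fill in the name/description frontmatter in SKILL.md.
```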
Watch for these signals that a learning should become a skill:

In conversation: the user explicitly asks ("save this as a skill" or similar).

In learning entries:

- Multiple `See Also` links (recurring issue)
- `best_practice` category with broad applicability

Before extraction, verify the entry meets the criteria table above.
This skill works across different AI coding agents with agent-specific activation.
**Claude Code**

- Activation: Hooks (UserPromptSubmit, PostToolUse)
- Setup: `.claude/settings.json` with hook configuration
- Detection: Automatic via hook scripts
**Codex**

- Activation: Hooks (same pattern as Claude Code)
- Setup: `.codex/settings.json` with hook configuration
- Detection: Automatic via hook scripts
**GitHub Copilot**

- Activation: Manual (no hook support)
- Setup: add the snippet below to `.github/copilot-instructions.md`
- Detection: Manual review at session end

```markdown
## Self-Improvement

After solving non-obvious issues, consider logging to `.learnings/`:
- Use format from self-improvement skill
- Link related entries with See Also
- Promote high-value learnings to skills
```

Ask in chat: "Should I log this as a learning?"
Regardless of agent, apply self-improvement whenever you hit one of the situations in the table at the top of this skill.

For Copilot users, add this to your prompts when relevant:

> After completing this task, evaluate if any learnings should be logged to `.learnings/` using the self-improvement skill format.

Shorter quick prompts work too.