# subagent-driven-development
Use when executing implementation plans with independent tasks. Dispatches fresh delegate_task per task with two-stage review (spec compliance then code quality).
Execute implementation plans by dispatching fresh subagents per task with systematic two-stage review.
Core principle: Fresh subagent per task + two-stage review (spec then quality) = high quality, fast iteration.
Use this skill when:
- You are executing an implementation plan made of independent, well-scoped tasks
- Each task can be implemented, tested, and reviewed on its own
vs. manual execution: fresh subagents keep each task's context clean, and the systematic two-stage review catches spec gaps and quality issues per task instead of at the end.
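End to end, the process can be sketched as an orchestration loop (a minimal sketch: `delegate_task` and `todo` stand in for the real agent tools and are passed in so the control flow is runnable; the review loops are simplified):

```python
# Minimal sketch of the per-task orchestration loop.
# `delegate_task` and `todo` stand in for the real agent tools.

def run_plan(tasks, delegate_task, todo):
    """Dispatch a fresh implementer plus two staged reviewers per task."""
    for task in tasks:
        # Fresh implementer with the full task text in context
        delegate_task(goal=f"Implement {task['id']}", context=task["spec"])

        # Stage 1: spec compliance review, repeated until it passes
        while delegate_task(goal="Spec review", context=task["spec"]) != "PASS":
            delegate_task(goal="Fix spec gaps", context=task["spec"])

        # Stage 2: code quality review, repeated until approved
        while delegate_task(goal="Quality review", context=task["spec"]) != "APPROVED":
            delegate_task(goal="Fix quality issues", context=task["spec"])

        todo([{"id": task["id"], "status": "completed"}], merge=True)
```

The rest of this document walks through each stage of that loop in detail.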
Read the plan file. Extract ALL tasks with their full text and context upfront. Create a todo list:
```python
# Read the plan
read_file("docs/plans/feature-plan.md")

# Create todo list with all tasks
todo([
    {"id": "task-1", "content": "Create User model with email field", "status": "pending"},
    {"id": "task-2", "content": "Add password hashing utility", "status": "pending"},
    {"id": "task-3", "content": "Create login endpoint", "status": "pending"},
])
```
Key: Read the plan ONCE. Extract everything. Don't make subagents read the plan file — provide the full task text directly in context.
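Extraction can be a simple split on the plan's task headings (a sketch, assuming the hypothetical convention that each task starts with a `## Task N: <title>` heading; adjust the pattern to your plan format):

```python
import re

def extract_tasks(plan_text):
    """Split a plan into per-task chunks so each subagent gets its
    full task text without re-reading the plan file.

    Assumes (hypothetically) that each task opens with a
    '## Task N: <title>' heading.
    """
    # re.split with a capturing group yields:
    # [preamble, heading1, body1, heading2, body2, ...]
    parts = re.split(r"(?m)^## (Task \d+: .+)$", plan_text)
    tasks = []
    for i in range(1, len(parts), 2):
        tasks.append({"title": parts[i], "spec": parts[i + 1].strip()})
    return tasks
```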
For EACH task in the plan:
Use `delegate_task` with complete context:
```python
delegate_task(
    goal="Implement Task 1: Create User model with email and password_hash fields",
    context="""
TASK FROM PLAN:
- Create: src/models/user.py
- Add User class with email (str) and password_hash (str) fields
- Use bcrypt for password hashing
- Include __repr__ for debugging

FOLLOW TDD:
1. Write failing test in tests/models/test_user.py
2. Run: pytest tests/models/test_user.py -v (verify FAIL)
3. Write minimal implementation
4. Run: pytest tests/models/test_user.py -v (verify PASS)
5. Run: pytest tests/ -q (verify no regressions)
6. Commit: git add -A && git commit -m "feat: add User model with password hashing"

PROJECT CONTEXT:
- Python 3.11, Flask app in src/app.py
- Existing models in src/models/
- Tests use pytest, run from project root
- bcrypt already in requirements.txt
""",
    toolsets=['terminal', 'file']
)
```
After the implementer completes, verify against the original spec:
```python
delegate_task(
    goal="Review if implementation matches the spec from the plan",
    context="""
ORIGINAL TASK SPEC:
- Create src/models/user.py with User class
- Fields: email (str), password_hash (str)
- Use bcrypt for password hashing
- Include __repr__

CHECK:
- [ ] All requirements from spec implemented?
- [ ] File paths match spec?
- [ ] Function signatures match spec?
- [ ] Behavior matches expected?
- [ ] Nothing extra added (no scope creep)?

OUTPUT: PASS or list of specific spec gaps to fix.
""",
    toolsets=['file']
)
```
If spec issues found: Fix gaps, then re-run spec review. Continue only when spec-compliant.
After spec compliance passes:
```python
delegate_task(
    goal="Review code quality for Task 1 implementation",
    context="""
FILES TO REVIEW:
- src/models/user.py
- tests/models/test_user.py

CHECK:
- [ ] Follows project conventions and style?
- [ ] Proper error handling?
- [ ] Clear variable/function names?
- [ ] Adequate test coverage?
- [ ] No obvious bugs or missed edge cases?
- [ ] No security issues?

OUTPUT FORMAT:
- Critical Issues: [must fix before proceeding]
- Important Issues: [should fix]
- Minor Issues: [optional]
- Verdict: APPROVED or REQUEST_CHANGES
""",
    toolsets=['file']
)
```
If quality issues found: Fix issues, re-review. Continue only when approved.
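Either review stage (spec or quality) can be run as a bounded review-fix loop so a stubborn task cannot cycle forever (a sketch; the three-round cap and escalation behavior are assumptions, not part of the skill):

```python
def review_until_approved(dispatch_review, dispatch_fix, max_rounds=3):
    """Run one review stage, fixing and re-reviewing until approval.

    dispatch_review() returns (approved: bool, issues: list[str]);
    dispatch_fix(issues) applies the fixes. Both stand in for
    delegate_task calls. max_rounds is an assumed safety cap.
    """
    for _ in range(max_rounds):
        approved, issues = dispatch_review()
        if approved:
            return True
        dispatch_fix(issues)
    return False  # repeated failures: stop and escalate to a human
```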
```python
todo([{"id": "task-1", "content": "Create User model with email field", "status": "completed"}], merge=True)
```
After ALL tasks are complete, dispatch a final integration reviewer:
```python
delegate_task(
    goal="Review the entire implementation for consistency and integration issues",
    context="""
All tasks from the plan are complete. Review the full implementation:
- Do all components work together?
- Any inconsistencies between tasks?
- All tests passing?
- Ready for merge?
""",
    toolsets=['terminal', 'file']
)
```
```bash
# Run full test suite
pytest tests/ -q

# Review all changes
git diff --stat

# Final commit if needed
git add -A && git commit -m "feat: complete [feature name] implementation"
```
Each task = 2-5 minutes of focused work.
Too big: "implement the whole auth feature" — spans many files and decisions; split it in the plan first.
Right size: "create the User model with tests" — one file or concern, implementable and reviewable in a single pass.
Why fresh subagent per task: each implementer starts with a clean context containing only its own task, so dead ends and noise from earlier tasks never leak in.
Why two-stage review: spec compliance first confirms the right thing was built; quality review second confirms it was built well. Polishing code that misses the spec wastes both reviews.
Cost trade-off: every task costs at least three dispatches (implementer, spec reviewer, quality reviewer). That overhead buys early detection, and issues are far cheaper to fix per task than after integration.
This skill EXECUTES plans created by the writing-plans skill: that skill produces the task list; this one dispatches a fresh subagent per task and reviews each result.
Implementer subagents should follow TDD: write a failing test, verify it fails, write the minimal implementation, verify it passes, run the full suite for regressions, then commit.
Include TDD instructions in every implementer context.
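One way to keep those instructions consistent is a small helper that stamps the same TDD steps into every implementer context (a sketch; the helper name and template text are illustrative, mirroring the Task 1 example above):

```python
TDD_STEPS = """FOLLOW TDD:
1. Write failing test in {test_file}
2. Run: pytest {test_file} -v (verify FAIL)
3. Write minimal implementation
4. Run: pytest {test_file} -v (verify PASS)
5. Run: pytest tests/ -q (verify no regressions)
6. Commit: git add -A && git commit -m "{commit_msg}"
"""

def build_implementer_context(task_spec, test_file, commit_msg, project_context):
    """Assemble a complete implementer context: task text, TDD steps,
    and project facts, so the subagent never reads the plan file."""
    return "\n".join([
        "TASK FROM PLAN:",
        task_spec,
        "",
        TDD_STEPS.format(test_file=test_file, commit_msg=commit_msg),
        "PROJECT CONTEXT:",
        project_context,
    ])
```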
The two-stage review process IS the code review. For final integration review, use the requesting-code-review skill's review dimensions.
If a subagent encounters bugs during implementation, it should report them rather than silently work around them: fix small bugs within the task, and surface larger ones as new tasks on the todo list.
```
[Read plan: docs/plans/auth-feature.md]
[Create todo list with 5 tasks]

--- Task 1: Create User model ---
[Dispatch implementer subagent]
Implementer: "Should email be unique?"
You: "Yes, email must be unique"
Implementer: Implemented, 3/3 tests passing, committed.

[Dispatch spec reviewer]
Spec reviewer: ✅ PASS — all requirements met

[Dispatch quality reviewer]
Quality reviewer: ✅ APPROVED — clean code, good tests

[Mark Task 1 complete]

--- Task 2: Password hashing ---
[Dispatch implementer subagent]
Implementer: No questions, implemented, 5/5 tests passing.

[Dispatch spec reviewer]
Spec reviewer: ❌ Missing: password strength validation (spec says "min 8 chars")

[Implementer fixes]
Implementer: Added validation, 7/7 tests passing.

[Dispatch spec reviewer again]
Spec reviewer: ✅ PASS

[Dispatch quality reviewer]
Quality reviewer: Important: Magic number 8, extract to constant
Implementer: Extracted MIN_PASSWORD_LENGTH constant
Quality reviewer: ✅ APPROVED

[Mark Task 2 complete]

... (continue for all tasks)

[After all tasks: dispatch final integration reviewer]
[Run full test suite: all passing]
[Done!]
```
- Fresh subagent per task
- Two-stage review every time
- Spec compliance FIRST
- Code quality SECOND
- Never skip reviews
- Catch issues early
Quality is not an accident. It's the result of systematic process.
License: MIT
```bash
mkdir -p ~/.hermes/skills/software-development/subagent-driven-development && \
curl -o ~/.hermes/skills/software-development/subagent-driven-development/SKILL.md \
  https://raw.githubusercontent.com/NousResearch/hermes-agent/main/skills/software-development/subagent-driven-development/SKILL.md
```