Subagent Patterns in Claude Code: Parallel Task Execution
Master subagent orchestration patterns for parallel task execution in Claude Code. Learn fan-out, pipeline, and specialist patterns with practical examples.
When a single agent isn't enough, subagents multiply your capabilities. They enable parallel task execution, specialized expertise, and complex workflows that would be impossible with a single thread of execution.
This guide covers subagent patterns—from basic fan-out to sophisticated orchestration—with practical examples you can implement today.
What Are Subagents?
Subagents are specialized AI instances that the primary Claude Code agent spawns for specific tasks. Think of them as workers in a distributed system:
┌──────────────────────────────────────────┐
│              Primary Agent               │
│        (Orchestrator/Coordinator)        │
└──────────┬────────────┬────────────┬─────┘
           │            │            │
           ▼            ▼            ▼
    ┌──────────┐ ┌──────────┐ ┌──────────┐
    │ Subagent │ │ Subagent │ │ Subagent │
    │    A     │ │    B     │ │    C     │
    │(Security)│ │  (Perf)  │ │  (Docs)  │
    └──────────┘ └──────────┘ └──────────┘
Each subagent:
- Has its own context and focus
- Can work in parallel with others
- Returns results to the primary agent
- May have specialized tools or permissions
Why Use Subagents?
1. Parallelization
N sequential tasks take N times as long as one. Parallel subagents divide that time:
Sequential:
Task A (10s) → Task B (10s) → Task C (10s) = 30s total
Parallel:
Task A (10s) ─┐
Task B (10s) ─┼→ 10s total
Task C (10s) ─┘
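The timing above is easy to reproduce. This is a minimal sketch, not Claude Code's actual mechanism: `run_task` is a stand-in that just sleeps, and `asyncio.gather` plays the role of spawning subagents in parallel:

```python
import asyncio
import time

# Stand-in for subagent work: in Claude Code this would be a Task tool
# invocation; here each "task" just sleeps for a fixed duration.
async def run_task(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)
    return f"{name} done"

async def sequential() -> list[str]:
    # One task after another: total time is the sum of all durations.
    return [await run_task(n, 0.1) for n in ("A", "B", "C")]

async def parallel() -> list[str]:
    # All tasks at once: total time is roughly the longest single task.
    return await asyncio.gather(*(run_task(n, 0.1) for n in ("A", "B", "C")))

start = time.perf_counter()
seq_results = asyncio.run(sequential())
seq_time = time.perf_counter() - start

start = time.perf_counter()
par_results = asyncio.run(parallel())
par_time = time.perf_counter() - start

print(f"sequential: {seq_time:.2f}s, parallel: {par_time:.2f}s")
```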
2. Specialization
Different tasks need different expertise:
Security Audit: Needs OWASP knowledge, vulnerability patterns
Performance Review: Needs profiling knowledge, optimization patterns
Documentation: Needs technical writing, API documentation skills
One agent trying to do all three is less effective than three specialists.
3. Context Isolation
Each subagent has fresh context:
- No pollution from unrelated tasks
- Focused attention on specific goals
- Cleaner, more reliable results
4. Scalability
Need to analyze 100 files? Spawn 10 subagents, each handling 10 files:
Without subagents: 100 files × 30s = 50 minutes
With 10 subagents: 10 files × 30s = 5 minutes
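The arithmetic assumes the files split evenly across subagents. A small helper (illustrative, not part of Claude Code) shows one way to batch a file list for distribution:

```python
# Sketch: split a file list into batches, one batch per subagent.
def make_batches(items: list[str], num_batches: int) -> list[list[str]]:
    """Distribute items round-robin so batch sizes differ by at most one."""
    batches: list[list[str]] = [[] for _ in range(num_batches)]
    for i, item in enumerate(items):
        batches[i % num_batches].append(item)
    return batches

files = [f"src/file_{i:03}.ts" for i in range(100)]  # hypothetical file names
batches = make_batches(files, 10)
print(len(batches), len(batches[0]))  # 10 batches of 10 files each
```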
Core Subagent Patterns
Pattern 1: Fan-Out / Fan-In
The most common pattern. Distribute work to multiple subagents, then collect results.
         ┌──────────┐
         │ Primary  │
         │  Agent   │
         └────┬─────┘
              │ Fan-Out
     ┌────────┼────────┐
     ▼        ▼        ▼
 ┌───────┐┌───────┐┌───────┐
 │Task 1 ││Task 2 ││Task 3 │
 └───┬───┘└───┬───┘└───┬───┘
     │        │        │
     └────────┼────────┘
              │ Fan-In
         ┌────▼─────┐
         │ Aggregate│
         │ Results  │
         └──────────┘
Use case: Code review across multiple modules
# /review-all
Review all modules in parallel using subagents.
## Instructions
1. Identify all modules to review (src/modules/*)
2. For each module, spawn a subagent:
- Give it the module path
- Ask for security, performance, and code quality review
- Collect the review results
3. Aggregate all reviews into a unified report
4. Prioritize findings by severity
## Subagent Prompt Template
"Review the code in {module_path} for:
- Security vulnerabilities (Critical, High, Medium, Low)
- Performance issues
- Code quality concerns
Return findings in structured format."
## Output
Combined report with:
- Executive summary
- Module-by-module findings
- Prioritized action items
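In orchestration terms, the /review-all flow looks like the sketch below. `review_module` is a hypothetical stand-in for a real subagent call and returns canned data; the shape of the fan-out and fan-in is the point:

```python
import asyncio
import json

# Hypothetical stand-in for spawning a review subagent via the Task tool;
# a real implementation would send the prompt template and parse the reply.
async def review_module(module: str) -> dict:
    await asyncio.sleep(0)
    return {"module": module, "findings": [], "summary": {"high": 0}}

async def review_all(modules: list[str]) -> dict:
    # Fan-out: one subagent per module, all running concurrently.
    reviews = await asyncio.gather(*(review_module(m) for m in modules))
    # Fan-in: aggregate per-module reports into one unified report.
    return {
        "modules_reviewed": len(reviews),
        "reports": {r["module"]: r for r in reviews},
    }

report = asyncio.run(review_all(["auth", "payments", "search"]))
print(json.dumps(report, indent=2))
```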
Pattern 2: Pipeline
Sequential processing where each stage transforms the output for the next.
┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐
│  Parse  │──▶│ Analyze │──▶│Transform│──▶│ Output  │
└─────────┘   └─────────┘   └─────────┘   └─────────┘
Use case: Documentation generation
# /generate-docs
Generate documentation through a multi-stage pipeline.
## Pipeline Stages
### Stage 1: Extract
Subagent extracts:
- Function signatures
- Type definitions
- Existing comments
- Code structure
### Stage 2: Analyze
Subagent analyzes:
- Function purposes
- Parameter meanings
- Return value semantics
- Side effects
### Stage 3: Generate
Subagent generates:
- JSDoc comments
- README sections
- API reference
- Usage examples
### Stage 4: Validate
Subagent validates:
- Accuracy of descriptions
- Completeness
- Consistency with code
- Link validity
## Execution
Run each stage sequentially, passing its output to the next stage.
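Structurally, a pipeline is just function composition: each stage consumes the previous stage's output. The stages below are trivial stand-ins for the subagent prompts above:

```python
# Each stage is a function that transforms the previous stage's output;
# in practice each would be a subagent, here they are simple placeholders.
def extract(source: str) -> dict:
    return {"signatures": [ln for ln in source.splitlines() if ln.startswith("def ")]}

def analyze(extracted: dict) -> dict:
    extracted["purposes"] = {s: "TODO: inferred purpose" for s in extracted["signatures"]}
    return extracted

def generate(analyzed: dict) -> str:
    return "\n".join(f"## {sig}\n{purpose}" for sig, purpose in analyzed["purposes"].items())

def run_pipeline(source: str) -> str:
    result = source
    for stage in (extract, analyze, generate):
        result = stage(result)  # output of one stage feeds the next
    return result

docs = run_pipeline("def add(a, b):\n    return a + b\n")
print(docs)
```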
Pattern 3: Specialist Ensemble
Different specialists analyze the same input from their unique perspectives.
                 ┌─────────────┐
                 │    Input    │
                 │   (Code)    │
                 └──────┬──────┘
        ┌───────────────┼───────────────┐
        ▼               ▼               ▼
 ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
 │  Security   │ │ Performance │ │Accessibility│
 │ Specialist  │ │ Specialist  │ │ Specialist  │
 └──────┬──────┘ └──────┬──────┘ └──────┬──────┘
        │               │               │
        └───────────────┼───────────────┘
                        ▼
                 ┌─────────────┐
                 │  Combined   │
                 │   Report    │
                 └─────────────┘
Use case: Comprehensive code audit
# /audit
Perform a comprehensive audit using specialist subagents.
## Specialists
### Security Specialist
Focus:
- OWASP Top 10 vulnerabilities
- Authentication/authorization
- Data validation
- Encryption practices
Prompt: "You are a security expert. Analyze this code for vulnerabilities..."
### Performance Specialist
Focus:
- Algorithm complexity
- Database query efficiency
- Memory usage
- Network optimization
Prompt: "You are a performance expert. Analyze this code for bottlenecks..."
### Accessibility Specialist
Focus:
- WCAG 2.1 compliance
- Keyboard navigation
- Screen reader support
- Color contrast
Prompt: "You are an accessibility expert. Analyze this UI code for compliance..."
### Maintainability Specialist
Focus:
- Code complexity
- Test coverage
- Documentation
- Design patterns
Prompt: "You are a software architect. Analyze this code for maintainability..."
## Aggregation
Combine findings with cross-cutting analysis:
- Conflicting recommendations
- Synergistic improvements
- Prioritization across domains
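The ensemble differs from fan-out only in that every specialist sees the same input. A sketch, with `run_specialist` as a hypothetical placeholder for spawning one subagent per prompt:

```python
import asyncio

# Specialist prompts from the /audit command above.
SPECIALISTS = {
    "security": "You are a security expert. Analyze this code for vulnerabilities...",
    "performance": "You are a performance expert. Analyze this code for bottlenecks...",
    "accessibility": "You are an accessibility expert. Analyze this UI code for compliance...",
    "maintainability": "You are a software architect. Analyze this code for maintainability...",
}

async def run_specialist(name: str, prompt: str, code: str) -> dict:
    await asyncio.sleep(0)  # placeholder for a real subagent call
    return {"specialist": name, "findings": []}

async def audit(code: str) -> dict:
    # Same input, different perspectives, all in parallel.
    results = await asyncio.gather(
        *(run_specialist(n, p, code) for n, p in SPECIALISTS.items())
    )
    # Aggregation: group findings by specialist for cross-cutting analysis.
    return {r["specialist"]: r["findings"] for r in results}

report = asyncio.run(audit("function login() { ... }"))
print(sorted(report))
```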
Pattern 4: Divide and Conquer
Break a large problem into smaller pieces, solve each, then combine.
 ┌─────────────────────────────────┐
 │          Large Problem          │
 │      (1000 lines of code)       │
 └────────────────┬────────────────┘
                  │ Divide
      ┌───────────┼───────────┐
      ▼           ▼           ▼
 ┌─────────┐ ┌─────────┐ ┌─────────┐
 │ Part 1  │ │ Part 2  │ │ Part 3  │
 │ 333 LOC │ │ 333 LOC │ │ 334 LOC │
 └────┬────┘ └────┬────┘ └────┬────┘
      │           │           │
      └───────────┼───────────┘
                  │ Conquer
            ┌─────▼─────┐
            │  Merged   │
            │ Solution  │
            └───────────┘
Use case: Large file refactoring
# /refactor-large
Refactor a large file using divide and conquer.
## Instructions
1. Analyze the file structure
2. Identify logical sections (classes, functions, modules)
3. For each section, spawn a subagent:
- Provide the section code
- Provide refactoring guidelines
- Ask for refactored version
4. Collect all refactored sections
5. Merge into final file, resolving conflicts
## Merge Strategy
- Preserve import ordering
- Deduplicate shared utilities
- Ensure type consistency
- Verify no circular dependencies
## Validation
After merge:
- Run type checker
- Run linter
- Run tests
- Compare behavior
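The divide step can be as simple as splitting on top-level definitions. In this sketch, `refactor_section` is a no-op placeholder for the per-section subagent:

```python
# Divide a large file into top-level sections, "refactor" each
# independently, then merge the results back into one file.
def divide(source: str) -> list[str]:
    sections, current = [], []
    for line in source.splitlines():
        if line.startswith("def ") and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))
    return sections

def refactor_section(section: str) -> str:
    # Real version: a subagent applies the refactoring guidelines here.
    return section.rstrip()

def conquer(source: str) -> str:
    return "\n\n".join(refactor_section(s) for s in divide(source))

big_file = "def a():\n    pass\ndef b():\n    pass\n"
merged = conquer(big_file)
print(merged)
```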
Pattern 5: Competitive / Consensus
Multiple subagents solve the same problem, then compare solutions.
          ┌───────────────┐
          │    Problem    │
          └───────┬───────┘
                  │
      ┌───────────┼───────────┐
      ▼           ▼           ▼
 ┌─────────┐ ┌─────────┐ ┌─────────┐
 │Solution │ │Solution │ │Solution │
 │    A    │ │    B    │ │    C    │
 └────┬────┘ └────┬────┘ └────┬────┘
      │           │           │
      └───────────┼───────────┘
                  ▼
          ┌───────────────┐
          │   Compare &   │
          │  Select Best  │
          └───────────────┘
Use case: Complex algorithm implementation
# /implement-algorithm
Get multiple implementation approaches, then select best.
## Subagent Approaches
### Subagent A: Functional Approach
"Implement using pure functional patterns. Prioritize immutability and composability."
### Subagent B: Object-Oriented Approach
"Implement using OOP patterns. Use classes and encapsulation."
### Subagent C: Performance-Optimized Approach
"Implement with maximum performance. Use any patterns needed for speed."
## Comparison Criteria
1. **Correctness**: Does it pass all test cases?
2. **Readability**: How easy to understand?
3. **Performance**: Time and space complexity
4. **Maintainability**: How easy to modify?
5. **Testability**: How easy to test?
## Selection
Primary agent compares all solutions and either:
- Selects the best one
- Synthesizes a hybrid combining best aspects
- Presents options to user for final decision
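Selection can be made explicit with weighted scoring. The weights and scores below are purely illustrative; in practice the correctness score would come from running the test suite and the rest from the primary agent's judgment:

```python
# Weight the comparison criteria, score each competing solution, pick a winner.
WEIGHTS = {"correctness": 5, "readability": 2, "performance": 3,
           "maintainability": 2, "testability": 2}

# Hypothetical per-criterion scores in [0, 1] for the three approaches.
solutions = {
    "functional": {"correctness": 1.0, "readability": 0.9, "performance": 0.6,
                   "maintainability": 0.9, "testability": 0.9},
    "oop":        {"correctness": 1.0, "readability": 0.8, "performance": 0.7,
                   "maintainability": 0.8, "testability": 0.8},
    "optimized":  {"correctness": 1.0, "readability": 0.5, "performance": 1.0,
                   "maintainability": 0.5, "testability": 0.6},
}

def score(metrics: dict) -> float:
    return sum(WEIGHTS[k] * v for k, v in metrics.items())

best = max(solutions, key=lambda name: score(solutions[name]))
print(best, {n: round(score(m), 1) for n, m in solutions.items()})
```

A hybrid or a user decision remains available when scores are close; the scoring just makes the trade-offs visible.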
Pattern 6: Hierarchical Delegation
Multi-level delegation for complex projects.
                ┌─────────────┐
                │   Project   │
                │   Manager   │
                └──────┬──────┘
       ┌───────────────┼───────────────┐
       ▼               ▼               ▼
 ┌───────────┐   ┌───────────┐   ┌───────────┐
 │ Frontend  │   │  Backend  │   │  DevOps   │
 │   Lead    │   │   Lead    │   │   Lead    │
 └─────┬─────┘   └─────┬─────┘   └─────┬─────┘
    ┌──┴──┐         ┌──┴──┐         ┌──┴──┐
    ▼     ▼         ▼     ▼         ▼     ▼
  ┌───┐ ┌───┐     ┌───┐ ┌───┐     ┌───┐ ┌───┐
  │UI │ │UX │     │API│ │DB │     │CI │ │K8s│
  └───┘ └───┘     └───┘ └───┘     └───┘ └───┘
Use case: Full-stack feature implementation
# /implement-feature
Implement a feature using hierarchical delegation.
## Project Manager
Coordinates overall implementation:
- Breaks feature into frontend, backend, devops tasks
- Spawns lead subagents for each area
- Aggregates progress and resolves blockers
## Frontend Lead
Coordinates UI/UX work:
- Spawns UI subagent for components
- Spawns UX subagent for user flows
- Integrates with backend contracts
## Backend Lead
Coordinates API/Data work:
- Spawns API subagent for endpoints
- Spawns DB subagent for schema/queries
- Ensures performance requirements
## DevOps Lead
Coordinates infrastructure:
- Spawns CI subagent for pipeline
- Spawns deployment subagent for K8s
- Ensures observability
## Communication
Leads report status to Project Manager.
Project Manager resolves cross-cutting concerns.
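The two-level structure can be sketched as nested fan-outs: the manager gathers its leads, and each lead gathers its workers. `worker` is a hypothetical stand-in for a leaf subagent:

```python
import asyncio

# Leaf subagent stand-in: in practice, a Task tool call per work item.
async def worker(task: str) -> str:
    await asyncio.sleep(0)
    return f"{task}: done"

async def lead(area: str, tasks: list[str]) -> dict:
    # Each lead fans out to its own workers.
    results = await asyncio.gather(*(worker(f"{area}/{t}") for t in tasks))
    return {"area": area, "results": results}

async def project_manager() -> dict:
    # The manager fans out to leads; the breakdown mirrors the diagram.
    plan = {"frontend": ["ui", "ux"], "backend": ["api", "db"], "devops": ["ci", "k8s"]}
    leads = await asyncio.gather(*(lead(a, ts) for a, ts in plan.items()))
    return {r["area"]: r["results"] for r in leads}

status = asyncio.run(project_manager())
print(status)
```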
Implementing Subagents
Using the Task Tool
Claude Code's Task tool spawns subagents:
# Spawning a subagent
Use the Task tool to create a focused subagent:
Task: "Review src/auth/*.ts for security vulnerabilities.
Focus on: authentication bypasses, session management, password handling.
Return: JSON with {file, line, severity, issue, recommendation}"
Subagent Configuration
Configure subagent behavior:
# Subagent Configuration
## Context
Provide only the context needed for the task.
Too much context = slower, less focused.
Too little context = incomplete results.
## Output Format
Specify exact output format for easy aggregation.
JSON is usually best for programmatic combination.
## Scope Limits
Define what the subagent should NOT do:
- Don't modify files (just analyze)
- Don't make network requests
- Don't access other modules
Error Handling
Handle subagent failures gracefully:
# Error Handling
## Timeout
If a subagent doesn't respond within 60s:
- Log the timeout
- Continue with other subagents
- Mark that result as unavailable
## Failure
If a subagent reports an error:
- Log the error details
- Try to continue without that result
- Report partial completion
## Invalid Output
If a subagent's output doesn't match the expected format:
- Attempt to parse what's available
- Request clarification if possible
- Mark as partial result
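If you orchestrate subagents from a script rather than a prompt, the timeout rule maps naturally onto `asyncio.wait_for`. A sketch with a deliberately slow stand-in task:

```python
import asyncio

# Stand-in subagent whose "work" is just a sleep of configurable length.
async def flaky_subagent(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def run_with_timeout(name: str, delay: float, timeout: float) -> dict:
    try:
        result = await asyncio.wait_for(flaky_subagent(name, delay), timeout)
        return {"name": name, "status": "ok", "result": result}
    except asyncio.TimeoutError:
        # Log the timeout and mark the result unavailable instead of
        # failing the whole run.
        return {"name": name, "status": "timeout", "result": None}

async def main() -> list[dict]:
    return await asyncio.gather(
        run_with_timeout("fast", 0.01, 0.5),
        run_with_timeout("slow", 1.0, 0.05),  # will time out
    )

results = asyncio.run(main())
print(results)
```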
Best Practices
1. Right-Size Your Subagents
Too Small: Overhead exceeds benefit
# Bad: One subagent per line of code
10 subagents for 10 lines = massive overhead
Too Large: Loses parallelization benefit
# Bad: One subagent for entire codebase
Might as well not use subagents
Just Right: Logical work units
# Good: One subagent per module/file/feature
Meaningful parallelization, manageable overhead
2. Minimize Context Sharing
Each subagent should be as independent as possible:
# Good: Independent tasks
Subagent A: Review auth module (self-contained)
Subagent B: Review payment module (self-contained)
# Bad: Interdependent tasks
Subagent A: Rename function across codebase
Subagent B: Update callers of renamed function
(These need to coordinate; use a single agent instead)
3. Standardize Output Formats
Aggregation is easiest with consistent formats:
{
"module": "auth",
"findings": [
{
"type": "security",
"severity": "high",
"location": "login.ts:45",
"issue": "SQL injection vulnerability",
"recommendation": "Use parameterized queries"
}
],
"summary": {
"critical": 0,
"high": 1,
"medium": 3,
"low": 5
}
}
4. Plan for Partial Failure
Design for subagent failures:
# Resilient Aggregation
1. Collect results from all subagents (even partial)
2. Note which subagents failed and why
3. Produce results from what succeeded
4. Clearly mark incomplete areas
5. Offer to retry failed subagents
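With `asyncio`, the steps above correspond to gathering with `return_exceptions=True`: failures come back as values instead of aborting the run. A sketch with one deliberately failing stand-in subagent:

```python
import asyncio

# Stand-in subagent: one of them raises to simulate a crashed subagent.
async def subagent(name: str) -> str:
    if name == "payments":
        raise RuntimeError("subagent crashed")
    return f"{name}: reviewed"

async def resilient_gather(names: list[str]) -> dict:
    # return_exceptions=True turns failures into values, so one crash
    # doesn't discard the results that succeeded.
    outcomes = await asyncio.gather(
        *(subagent(n) for n in names), return_exceptions=True
    )
    succeeded = {n: o for n, o in zip(names, outcomes) if not isinstance(o, Exception)}
    failed = {n: repr(o) for n, o in zip(names, outcomes) if isinstance(o, Exception)}
    # Report partial completion and clearly mark what is missing.
    return {"succeeded": succeeded, "failed": failed, "complete": not failed}

report = asyncio.run(resilient_gather(["auth", "payments", "search"]))
print(report)
```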
5. Monitor and Optimize
Track subagent performance:
# Performance Tracking
For each subagent invocation, record:
- Task description
- Time taken
- Success/failure
- Output size
Use this data to:
- Identify slow patterns
- Right-size task distribution
- Detect common failure modes
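A context manager is a convenient way to record those fields per invocation. A minimal sketch (the `track` helper is illustrative):

```python
import time
from contextlib import contextmanager

# Accumulate one record per subagent invocation.
metrics: list[dict] = []

@contextmanager
def track(task: str):
    start = time.perf_counter()
    record = {"task": task, "success": False, "seconds": 0.0}
    try:
        yield record
        record["success"] = True  # only reached if the body didn't raise
    finally:
        record["seconds"] = time.perf_counter() - start
        metrics.append(record)  # recorded even on failure

with track("review auth module") as rec:
    time.sleep(0.01)  # stand-in for the actual subagent call

print(metrics[0]["task"], metrics[0]["success"])
```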
When NOT to Use Subagents
Subagents add overhead. Avoid them when:
1. Task is Simple
# Don't use subagents for:
- Reading a single file
- Making one small edit
- Running one test
2. Strong Dependencies Exist
# Don't parallelize when:
- Task B depends on Task A's output
- Changes must be coordinated
- Order matters
3. Context is Critical
# Use single agent when:
- Deep understanding of history needed
- Cross-file relationships are complex
- Conversation context is essential
4. Overhead Exceeds Benefit
# Don't use subagents when:
- Task would take 5s anyway
- Spawning/aggregating takes longer than sequential
- Few tasks to parallelize
Conclusion
Subagents transform Claude Code from a single-threaded assistant into a parallel computing platform. The patterns—fan-out/fan-in, pipeline, specialist ensemble, divide-and-conquer, competitive consensus, and hierarchical delegation—each serve different needs.
Start simple: use fan-out for code reviews across multiple files. As you get comfortable, explore specialist patterns for comprehensive audits. Eventually, you'll naturally reach for the right pattern for each situation.
The key insight: subagents aren't about doing more—they're about doing the right amount of work at the right level of specialization. When a task naturally decomposes into parallel, independent units, subagents shine. When it doesn't, a single focused agent is still best.
This completes our Claude Code deep-dive series. Ready to explore skills from other developers? Browse the AI Skill Market to find tools built by the community.