# Automating Code Review with Claude Code Skills
Build a comprehensive code review automation skill. Learn to create AI-powered review workflows that catch bugs, enforce standards, and improve code quality.
Code review is essential but time-consuming. Senior developers spend hours reviewing PRs, often catching the same issues repeatedly. Junior developers wait for feedback. The whole team moves slower.
What if you could automate the mechanical parts of code review? Catch the obvious issues automatically, so human reviewers can focus on architecture, business logic, and mentoring?
This tutorial shows you how to build a comprehensive code review automation skill that integrates with your development workflow.
## What Can Be Automated in Code Review?

### Good Candidates for Automation
- Style violations: Inconsistent formatting, naming conventions
- Security issues: Hardcoded secrets, SQL injection patterns
- Performance problems: N+1 queries, unnecessary loops
- Common bugs: Null pointer patterns, off-by-one errors
- Documentation gaps: Missing comments, outdated docs
- Test coverage: Untested functions, missing edge cases
### What Should Stay Human
- Architecture decisions: Is this the right approach?
- Business logic: Does this solve the problem correctly?
- User experience: Will users understand this interface?
- Mentoring opportunities: How can this developer grow?
## Building the Review Skill

### Core Review Command
**.claude/commands/review.md:**

```yaml
---
description: Comprehensive code review automation
arguments:
  - name: target
    description: File, directory, or PR number to review
    required: true
  - name: focus
    description: Review focus (security, performance, style, all)
    required: false
    default: all
  - name: output
    description: Output format (inline, report, github)
    required: false
    default: report
---
```
# Automated Code Review
When the user runs /review [target], perform a comprehensive code review.
## Determine Review Scope
If target is a PR number:
1. Fetch PR details: `gh pr view [number]`
2. Get changed files: `gh pr diff [number] --name-only`
3. Review only changed lines
If target is a file or directory:
1. Find all source files
2. Review each file
## Review Process
For each file, analyze these categories:
### 1. Security Review
Check for:
- Hardcoded secrets (API keys, passwords, tokens)
- SQL injection vulnerabilities
- XSS vulnerabilities
- Insecure data handling
- Missing authentication/authorization
- Unsafe deserialization
- Path traversal risks
Severity levels:
- CRITICAL: Immediate security risk
- HIGH: Potential security issue
- MEDIUM: Best practice violation
- LOW: Minor concern
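Many of these checks are pattern-based, which makes them easy to automate. A minimal sketch of a line-by-line secret scan (the regexes below are illustrative examples, not a complete ruleset):

```javascript
// Minimal secret scanner: flags lines matching common secret patterns.
// The patterns here are illustrative examples, not an exhaustive ruleset.
const SECRET_PATTERNS = [
  { name: 'api-key', regex: /api[_-]?key\s*[:=]\s*['"][^'"]+['"]/i },
  { name: 'password', regex: /password\s*[:=]\s*['"][^'"]+['"]/i },
  { name: 'bearer-token', regex: /bearer\s+[a-z0-9._-]{20,}/i },
];

function scanForSecrets(source) {
  const findings = [];
  source.split('\n').forEach((line, i) => {
    for (const { name, regex } of SECRET_PATTERNS) {
      if (regex.test(line)) {
        findings.push({ line: i + 1, rule: name, severity: 'CRITICAL' });
      }
    }
  });
  return findings;
}

// Example: the hardcoded key on line 2 should be flagged.
const sample = `const port = 3000;\nconst apiKey = "sk-live-1234567890";`;
console.log(scanForSecrets(sample));
// [ { line: 2, rule: 'api-key', severity: 'CRITICAL' } ]
```

Real scanners also use entropy checks and provider-specific token formats; regexes alone will produce false positives that need tuning.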
### 2. Performance Review
Check for:
- N+1 query patterns
- Unnecessary database calls
- Blocking operations in async code
- Memory leaks (event listeners, closures)
- Inefficient algorithms (nested loops on large data)
- Missing caching opportunities
- Large bundle imports
### 3. Code Quality Review
Check for:
- Functions over 50 lines
- Deep nesting (>3 levels)
- Magic numbers without constants
- Duplicated code
- Dead code
- Missing error handling
- Unclear variable names
- Missing type annotations
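Several of these checks reduce to simple source metrics. A rough sketch of the nesting check using brace counting (a real reviewer would parse the AST; this approximation is for illustration only):

```javascript
// Rough nesting-depth check via brace counting.
// A production tool would walk the AST; counting braces is only an
// approximation for illustration (it miscounts braces in strings, etc.).
function maxNestingDepth(source) {
  let depth = 0;
  let max = 0;
  for (const ch of source) {
    if (ch === '{') { depth += 1; max = Math.max(max, depth); }
    else if (ch === '}') { depth -= 1; }
  }
  return max;
}

const deeplyNested = `
function f(a) {
  if (a) {
    for (const x of a) {
      if (x > 0) {
        console.log(x);
      }
    }
  }
}`;

console.log(maxNestingDepth(deeplyNested)); // 4, which exceeds the limit of 3
```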
### 4. Documentation Review
Check for:
- Missing function documentation
- Outdated comments
- Unclear code without explanation
- Missing README updates for new features
- Undocumented environment variables
### 5. Testing Review
Check for:
- New code without tests
- Tests that don't test behavior
- Missing edge case tests
- Flaky test patterns
- Test code quality issues
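The first check can be approximated by file naming alone. A sketch assuming a `foo.ts` to `foo.test.ts` convention (adjust the pattern to your project's layout):

```javascript
// Flag source files that have no matching test file.
// Assumes the convention foo.ts -> foo.test.ts; this naming rule is an
// assumption for illustration and should match your project.
function findUntested(sourceFiles, allFiles) {
  const fileSet = new Set(allFiles);
  return sourceFiles.filter((f) => {
    const testName = f.replace(/\.(ts|js)$/, '.test.$1');
    return !fileSet.has(testName);
  });
}

const files = ['src/user.ts', 'src/user.test.ts', 'src/order.ts'];
console.log(findUntested(['src/user.ts', 'src/order.ts'], files));
// ['src/order.ts']
```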
## Generate Review
Format based on output argument:
### Output: report (default)
```markdown
# Code Review Report
**Target:** [file/PR]
**Date:** [date]
**Reviewer:** Claude Code
## Summary
- Critical issues: X
- High issues: Y
- Medium issues: Z
- Low issues: N
## Critical Issues (Must Fix)
### [Issue 1 Title]
**File:** path/to/file.ts:42
**Category:** Security
**Description:** [What's wrong]
**Recommendation:** [How to fix]
**Code:**
[problematic code]
## High Priority Issues
[...]
## Suggestions for Improvement
[...]
## Positive Observations
[What's done well]
```

### Output: inline

Generate comments that can be added directly to code:
- Add TODO comments for issues
- Add explanatory comments for fixes
- Use a consistent format: `// REVIEW: [category] - [issue]`

### Output: github
Generate GitHub review format for PR comments.
### Security-Focused Review
**.claude/commands/security-review.md:**
```markdown
---
description: Deep security analysis of code
arguments:
- name: target
description: File or directory to analyze
required: true
---
# Security Code Review
Specialized security-focused code review.
## OWASP Top 10 Check
### A01: Broken Access Control
- Check authorization on all endpoints
- Verify user can only access their data
- Look for IDOR vulnerabilities
- Check for privilege escalation paths
### A02: Cryptographic Failures
- Identify sensitive data handling
- Check encryption algorithms (no MD5/SHA1 for passwords)
- Verify secure key storage
- Check TLS configuration
### A03: Injection
- SQL injection in database queries
- Command injection in shell calls
- LDAP injection
- XPath injection
- Template injection
### A04: Insecure Design
- Check business logic flaws
- Verify threat modeling was done
- Look for missing rate limiting
### A05: Security Misconfiguration
- Default credentials
- Unnecessary features enabled
- Missing security headers
- Verbose error messages
### A06: Vulnerable Components
- Check dependencies for known CVEs
- Outdated libraries
- Abandoned packages
### A07: Authentication Failures
- Weak password policies
- Missing MFA
- Session management issues
- Credential stuffing vulnerabilities
### A08: Software and Data Integrity
- Untrusted deserialization
- Missing integrity checks
- Insecure CI/CD
### A09: Logging and Monitoring
- Missing security logs
- Sensitive data in logs
- Missing alerting
### A10: Server-Side Request Forgery
- Unvalidated URLs
- Internal service access
- Cloud metadata access
## Output
Generate a security report with:
- Vulnerability classification
- CVSS score estimate
- Proof of concept if applicable
- Remediation steps
- References to security standards
```
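The A03 checks above reduce to one core rule: user input must never be concatenated into a query string. A sketch of the safe pattern and a mechanical detector (the `db.query` calls are hypothetical; `?` placeholders follow the style of most SQL drivers):

```javascript
// BAD: user input concatenated into SQL, a classic injection vector.
// db.query("SELECT * FROM users WHERE name = '" + userName + "'");

// GOOD: parameterized query; the driver escapes the value.
// db.query('SELECT * FROM users WHERE name = ?', [userName]);

// A review rule can flag the concatenation pattern mechanically:
const sqlConcat = /(query|execute)\s*\(\s*[^)]*\+/;

console.log(sqlConcat.test('db.query("SELECT * FROM t WHERE id = " + id)'));   // true
console.log(sqlConcat.test('db.query("SELECT * FROM t WHERE id = ?", [id])')); // false
```

Pattern matching like this catches the obvious cases; string building that spans several lines still needs data-flow analysis or human eyes.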
### Performance Review Skill

**.claude/commands/perf-review.md:**
```yaml
---
description: Performance-focused code analysis
arguments:
  - name: target
    description: File or directory to analyze
    required: true
---
```
# Performance Code Review
Analyze code for performance issues.
## Database Performance
### Query Analysis
- Count database calls per operation
- Identify N+1 query patterns
- Check for missing indexes (based on WHERE clauses)
- Look for SELECT * usage
- Identify unneeded eager loading
### Example N+1 Detection
```javascript
// BAD: N+1 query pattern, one query per user
const users = await User.findAll();
for (const user of users) {
  const orders = await Order.findByUser(user.id); // N queries!
}

// GOOD: single query with a join
const users = await User.findAll({ include: Order });
```

## Algorithm Complexity

### Time Complexity
- Flag O(n^2) or worse in hot paths
- Identify unnecessary nested loops
- Suggest more efficient alternatives
### Space Complexity
- Large object creation in loops
- Memory leaks (closures, event listeners)
- Unbounded caches
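Unbounded caches grow until the process runs out of memory. A minimal bounded (LRU-style) cache sketch using `Map` insertion order; a production system would likely use an existing library instead:

```javascript
// Bounded LRU-style cache: a Map iterates keys in insertion order, so
// the first key is always the least recently used entry.
class BoundedCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);   // re-insert to mark as recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry instead of growing forever.
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new BoundedCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');      // touch 'a' so 'b' becomes least recently used
cache.set('c', 3);   // evicts 'b'
console.log(cache.get('b')); // undefined
console.log(cache.get('a')); // 1
```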
## Frontend Performance

### Bundle Size
- Large imports: `import _ from 'lodash'` vs `import { map } from 'lodash'`
- Unnecessary dependencies
- Missing code splitting
### Rendering
- Unnecessary re-renders
- Missing memoization
- Heavy computations in render path
### Network
- Missing caching headers
- Unoptimized images
- Too many requests
## Async Performance

### Blocking Operations
- Synchronous I/O in async code
- Missing parallelization
- Serial async when parallel possible
### Example

```javascript
// BAD: serial execution, each call waits for the previous one
const a = await fetchA();
const b = await fetchB();
const c = await fetchC();

// GOOD: parallel execution
const [a, b, c] = await Promise.all([
  fetchA(),
  fetchB(),
  fetchC(),
]);
```

## Output
Generate performance report with:
- Issue classification
- Performance impact estimate
- Benchmarking suggestions
- Optimized code examples
## Integrating with GitHub
### PR Review Workflow
**.claude/commands/review-pr.md:**
```yaml
---
description: Review a GitHub PR and post comments
arguments:
  - name: pr
    description: PR number
    required: true
  - name: post
    description: Whether to post review to GitHub
    required: false
    default: false
---
```
# GitHub PR Review
Review a PR and optionally post the review to GitHub.
## Fetch PR Information
1. Get PR details: `gh pr view [pr] --json title,body,author,files`
2. Get full diff: `gh pr diff [pr]`
3. Get existing comments: `gh pr view [pr] --comments`
## Analyze Changes
For each changed file:
1. Determine change type:
- New file
- Modified file
- Deleted file
- Renamed file
2. Review only changed lines:
- Focus on additions (+)
- Consider deletions (-) for context
3. Apply standard review criteria
## Compile Review
Structure the review:
### Summary
- Overall assessment: APPROVE / REQUEST_CHANGES / COMMENT
- Brief summary of the PR purpose
- Key concerns or praise
### File Comments
For each issue found:
- File path
- Line number (from diff)
- Comment text
- Suggestion if applicable
### General Comments
- Architecture observations
- Testing suggestions
- Documentation needs
## Post to GitHub (if --post)
Use GitHub CLI to submit review:
```bash
gh pr review [pr] --comment --body "Review body here"

# Or for specific line comments
gh api repos/{owner}/{repo}/pulls/{pr}/comments \
  -f body="Comment text" \
  -f path="file/path.ts" \
  -f line=42 \
  -f side="RIGHT"
```

## Output Format

```markdown
# PR Review: #[number] - [title]

**Author:** @[username]
**Files Changed:** [count]

## Summary
[Brief assessment]

## Verdict: [APPROVE/REQUEST_CHANGES/COMMENT]

## Issues Found

### Critical
- file.ts:42 - [Issue description]

### Suggestions
- file.ts:100 - [Suggestion]

## Positive Notes
- Good test coverage
- Clear naming conventions

[Posted to GitHub: Yes/No]
```
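The per-line issues in this format map directly onto GitHub's review-comment payload, the same `body`/`path`/`line`/`side` fields used with `gh api` above. A sketch of that mapping; the finding object shape is a hypothetical internal format:

```javascript
// Map review findings to GitHub pull-request review comment payloads.
// The field names (body, path, line, side) match GitHub's REST API for
// PR review comments; the input finding shape is a hypothetical format.
function toGithubComments(findings) {
  return findings.map((f) => ({
    body: `**[${f.severity}] ${f.category}:** ${f.message}`,
    path: f.file,
    line: f.line,
    side: 'RIGHT', // comment on the new version of the line
  }));
}

const comments = toGithubComments([
  { severity: 'CRITICAL', category: 'Security', file: 'auth.ts', line: 42,
    message: 'Hardcoded credential' },
]);
console.log(comments[0].path); // 'auth.ts'
```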
### Automated PR Review Hook
**.claude/hooks/pr-opened.md:**
```markdown
---
trigger:
event: pull_request
action: opened
---
# Automatic PR Review
When a PR is opened, automatically run initial review.
## Process
1. Wait 30 seconds (let CI start)
2. Run /review-pr [number]
3. If critical issues found, add comment
4. If no critical issues, add approval
## Behavior
Only run on PRs that:
- Are from repository members (not forks)
- Are not marked as draft
- Are not from bots
## Comment Format
Automated Code Review
I've performed an initial automated review of this PR.
Summary
[Brief summary of findings]
Issues Found
[List of issues or "No significant issues found"]
This is an automated review. A human reviewer will follow up.
```

## Building a Review Dashboard

### Coverage Tracking

**.claude/commands/review-stats.md:**
```yaml
---
description: Generate code review statistics
arguments:
  - name: period
    description: Time period (week, month, quarter)
    required: false
    default: week
---
```
# Review Statistics
Generate statistics about code reviews.
## Gather Data
1. Get merged PRs in period: `gh pr list --state merged --limit 100`
2. For each PR, analyze:
- Time to first review
- Time to merge
- Number of review rounds
- Issues found by automated review
- Issues found by human review
## Calculate Metrics
### Review Efficiency
- Average time to first review
- Average time to merge
- Percentage of PRs with automated review issues
### Issue Categories
- Security issues found (automated vs human)
- Performance issues found
- Style issues found
### Review Quality
- PRs that required rework after merge
- Issues that slipped through review
## Generate Report
```markdown
# Code Review Statistics - [Period]

**PRs Merged:** 42
**Average Time to Review:** 4.2 hours
**Average Time to Merge:** 18.6 hours

Issues Found by Automated Review: 127
- Security: 12
- Performance: 23
- Style: 67
- Documentation: 25

Issues Found by Human Review: 45
- Architecture: 18
- Business Logic: 15
- Other: 12

Review Effectiveness:
- 94% of security issues caught before merge
- 87% of performance issues caught before merge

Top Issue Categories:
- Missing error handling (23 occurrences)
- Incomplete test coverage (19 occurrences)
- Documentation gaps (17 occurrences)

Recommendations:
- Add error handling checklist to PR template
- Require test coverage report in CI
- Schedule documentation review sessions
```
## Review Standards Configuration

Create a configuration file for your review standards:

**.claude/review-config.json:**
```json
{
  "security": {
    "enabled": true,
    "failOnCritical": true,
    "patterns": {
      "secrets": [
        "api[_-]?key",
        "secret[_-]?key",
        "password",
        "bearer"
      ],
      "sqlInjection": [
        "query\\(.*\\+.*\\)",
        "execute\\(.*\\+.*\\)"
      ]
    }
  },
  "performance": {
    "enabled": true,
    "maxFunctionLines": 50,
    "maxFileLines": 500,
    "flagPatterns": [
      "for.*for",
      "SELECT \\*"
    ]
  },
  "style": {
    "enabled": true,
    "requireTypes": true,
    "maxLineLength": 100,
    "namingConvention": "camelCase"
  },
  "documentation": {
    "enabled": true,
    "requireFunctionDocs": true,
    "requireClassDocs": true
  },
  "testing": {
    "enabled": true,
    "minCoverage": 80,
    "requireTestsForNewFiles": true
  }
}
```
Reference in your review skill:
```markdown
## Load Configuration

Read `.claude/review-config.json` for review standards.
Apply configured rules and thresholds.
Skip disabled categories.
```
## Best Practices

### 1. Start Non-Blocking
Begin with reviews that don't block PRs:
- Add comments but don't require changes
- Build trust in the system
- Tune for false positives
### 2. Gradually Increase Enforcement
As confidence grows:
- Week 1-2: Comment only
- Week 3-4: Request changes on critical issues
- Week 5+: Block PRs with unfixed critical issues
### 3. Track False Positives
Monitor when automated reviews are wrong:
- Log dismissed comments
- Review patterns in false positives
- Update rules to reduce noise
### 4. Complement Humans
Automated review should:
- Handle mechanical checks
- Free humans for high-value review
- Never replace architectural feedback
- Support, not replace, mentoring
### 5. Make It Easy to Override

Sometimes rules should be broken. Provide clear override mechanisms:

```javascript
// @review-ignore: security/hardcoded-secret
// This is an example for documentation, not a real secret
const EXAMPLE_API_KEY = 'example-key-12345';
```
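A review skill can honor these overrides by checking the line directly above each finding. A sketch (`isSuppressed` is a hypothetical helper; the directive format matches the example above):

```javascript
// Check whether a finding is suppressed by an @review-ignore directive
// on the line directly above it. Hypothetical helper for illustration.
function isSuppressed(lines, findingLine, rule) {
  const prev = lines[findingLine - 2]; // finding line numbers are 1-based
  if (!prev) return false;
  const match = prev.match(/@review-ignore:\s*(\S+)/);
  return match !== null && match[1] === rule;
}

const source = [
  "// @review-ignore: security/hardcoded-secret",
  "const EXAMPLE_API_KEY = 'example-key-12345';",
  "const other = doSomething();",
];
console.log(isSuppressed(source, 2, 'security/hardcoded-secret')); // true
console.log(isSuppressed(source, 3, 'security/hardcoded-secret')); // false
```

Logging each suppressed finding keeps overrides auditable, which feeds directly into the false-positive tracking described above.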
## Summary
Automated code review catches mechanical issues so human reviewers can focus on what matters: architecture, business logic, and developer growth.
### Skills Created

| Skill | Purpose |
|---|---|
| `/review` | Comprehensive code review |
| `/security-review` | Deep security analysis |
| `/perf-review` | Performance analysis |
| `/review-pr` | GitHub PR review |
| `/review-stats` | Review metrics |
### Integration Points
- Pre-commit hooks for local review
- CI/CD for automated PR review
- GitHub Actions for comment posting
- Dashboard for metrics tracking
Want to automate documentation too? Check out Documentation Generation Skills next.