Automating Code Review with Claude Code Skills
Build a comprehensive code review automation skill. Learn to create AI-powered review workflows that catch bugs, enforce standards, and improve code quality.
Code review is essential but time-consuming. Senior developers spend hours reviewing PRs, often catching the same issues repeatedly. Junior developers wait for feedback. The whole team moves slower.
What if you could automate the mechanical parts of code review? Catch the obvious issues automatically, so human reviewers can focus on architecture, business logic, and mentoring?
This tutorial shows you how to build a comprehensive code review automation skill that integrates with your development workflow.
**.claude/commands/review.md:**
---
description: Comprehensive code review automation
arguments:
- name: target
description: File, directory, or PR number to review
required: true
- name: focus
description: Review focus (security, performance, style, all)
required: false
default: all
- name: output
description: Output format (inline, report, github)
required: false
default: report
---
# Automated Code Review
When the user runs /review [target], perform a comprehensive code review.
## Determine Review Scope
If target is a PR number:
1. Fetch PR details: `gh pr view [number]`
2. Get changed files: `gh pr diff [number] --name-only`
3. Review only changed lines
If target is a file or directory:
1. Find all source files
2. Review each file
## Review Process
For each file, analyze these categories:
### 1. Security Review
Check for:
- Hardcoded secrets (API keys, passwords, tokens)
- SQL injection vulnerabilities
- XSS vulnerabilities
- Insecure data handling
- Missing authentication/authorization
- Unsafe deserialization
- Path traversal risks
Severity levels:
- CRITICAL: Immediate security risk
- HIGH: Potential security issue
- MEDIUM: Best practice violation
- LOW: Minor concern
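One way the secret check could be implemented, sketched below. The pattern list and severity assignments are illustrative assumptions, not part of the skill specification:

```javascript
// Illustrative secret patterns; a real skill would load these from config.
const SECRET_PATTERNS = [
  { name: 'api-key', regex: /api[_-]?key\s*[:=]\s*['"][^'"]+['"]/i, severity: 'CRITICAL' },
  { name: 'password', regex: /password\s*[:=]\s*['"][^'"]+['"]/i, severity: 'CRITICAL' },
  { name: 'bearer-token', regex: /bearer\s+[a-z0-9._-]{20,}/i, severity: 'HIGH' },
];

// Scan source text line by line and report matches with 1-based line numbers.
function findSecrets(source) {
  const findings = [];
  source.split('\n').forEach((line, i) => {
    for (const { name, regex, severity } of SECRET_PATTERNS) {
      if (regex.test(line)) {
        findings.push({ line: i + 1, rule: name, severity });
      }
    }
  });
  return findings;
}
```

Regex-based detection catches the common cases cheaply; dedicated scanners go further by checking string entropy as well.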
### 2. Performance Review
Check for:
- N+1 query patterns
- Unnecessary database calls
- Blocking operations in async code
- Memory leaks (event listeners, closures)
- Inefficient algorithms (nested loops on large data)
- Missing caching opportunities
- Large bundle imports
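The "inefficient algorithms" item refers to patterns like this hypothetical duplicate finder, shown both ways:

```javascript
// BAD: O(n^2) duplicate detection via nested loops
function findDuplicatesSlow(items) {
  const dupes = [];
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j] && !dupes.includes(items[i])) {
        dupes.push(items[i]);
      }
    }
  }
  return dupes;
}

// GOOD: O(n) using a Set to track already-seen values
function findDuplicatesFast(items) {
  const seen = new Set();
  const dupes = new Set();
  for (const item of items) {
    if (seen.has(item)) dupes.add(item);
    seen.add(item);
  }
  return [...dupes];
}
```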
### 3. Code Quality Review
Check for:
- Functions over 50 lines
- Deep nesting (>3 levels)
- Magic numbers without constants
- Duplicated code
- Dead code
- Missing error handling
- Unclear variable names
- Missing type annotations
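The function-length check can be approximated mechanically. Here is a rough sketch using a naive brace-depth heuristic (it misses arrow functions and braces inside strings; a real implementation would parse an AST):

```javascript
// Flag function declarations longer than maxLines by tracking brace depth.
function findLongFunctions(source, maxLines = 50) {
  const lines = source.split('\n');
  const findings = [];
  let start = null; // line index where the current function began
  let depth = 0;    // current brace nesting depth
  lines.forEach((line, i) => {
    if (start === null && /function\s/.test(line)) start = i;
    if (start !== null) {
      depth += (line.match(/{/g) || []).length;
      depth -= (line.match(/}/g) || []).length;
      if (depth === 0 && /}/.test(line)) {
        const length = i - start + 1;
        if (length > maxLines) findings.push({ startLine: start + 1, length });
        start = null;
      }
    }
  });
  return findings;
}
```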
### 4. Documentation Review
Check for:
- Missing function documentation
- Outdated comments
- Unclear code without explanation
- Missing README updates for new features
- Undocumented environment variables
### 5. Testing Review
Check for:
- New code without tests
- Tests that don't test behavior
- Missing edge case tests
- Flaky test patterns
- Test code quality issues
## Generate Review
Format based on output argument:
### Output: report (default)
```markdown
# Code Review Report
**Target:** [file/PR]
**Date:** [date]
**Reviewer:** Claude Code
## Summary
- Critical issues: X
- High issues: Y
- Medium issues: Z
- Low issues: N
## Critical Issues (Must Fix)
### [Issue 1 Title]
**File:** path/to/file.ts:42
**Category:** Security
**Description:** [What's wrong]
**Recommendation:** [How to fix]
**Code:**
[problematic code]
## High Priority Issues
[...]
## Suggestions for Improvement
[...]
## Positive Observations
[What's done well]
```
### Output: inline
Generate comments that can be added directly to code:
```
// REVIEW: [category] - [issue]
```
### Output: github
Generate a GitHub review format suitable for posting as PR comments.
### Security-Focused Review
**.claude/commands/security-review.md:**
```markdown
---
description: Deep security analysis of code
arguments:
- name: target
description: File or directory to analyze
required: true
---
# Security Code Review
Specialized security-focused code review.
## OWASP Top 10 Check
### A01: Broken Access Control
- Check authorization on all endpoints
- Verify user can only access their data
- Look for IDOR vulnerabilities
- Check for privilege escalation paths
### A02: Cryptographic Failures
- Identify sensitive data handling
- Check encryption algorithms (no MD5/SHA1 for passwords)
- Verify secure key storage
- Check TLS configuration
### A03: Injection
- SQL injection in database queries
- Command injection in shell calls
- LDAP injection
- XPath injection
- Template injection
### A04: Insecure Design
- Check business logic flaws
- Verify threat modeling was done
- Look for missing rate limiting
### A05: Security Misconfiguration
- Default credentials
- Unnecessary features enabled
- Missing security headers
- Verbose error messages
### A06: Vulnerable Components
- Check dependencies for known CVEs
- Outdated libraries
- Abandoned packages
### A07: Authentication Failures
- Weak password policies
- Missing MFA
- Session management issues
- Credential stuffing vulnerabilities
### A08: Software and Data Integrity
- Untrusted deserialization
- Missing integrity checks
- Insecure CI/CD
### A09: Logging and Monitoring
- Missing security logs
- Sensitive data in logs
- Missing alerting
### A10: Server-Side Request Forgery
- Unvalidated URLs
- Internal service access
- Cloud metadata access
## Output
Generate a security report with:
- Vulnerability classification
- CVSS score estimate
- Proof of concept if applicable
- Remediation steps
- References to security standards
```
### Performance-Focused Review
**.claude/commands/perf-review.md:**
---
description: Performance-focused code analysis
arguments:
- name: target
description: File or directory to analyze
required: true
---
# Performance Code Review
Analyze code for performance issues.
## Database Performance
### Query Analysis
- Count database calls per operation
- Identify N+1 query patterns
- Check for missing indexes (based on WHERE clauses)
- Look for SELECT * usage
- Identify unneeded eager loading
### Example N+1 Detection
```javascript
// BAD: N+1 query pattern
const users = await User.findAll();
for (const user of users) {
const orders = await Order.findByUser(user.id); // N queries!
}
// GOOD: Single query with join
const users = await User.findAll({ include: Order });
```
### Bundle Size
Flag imports that pull in an entire library when only a few functions are used:
```javascript
// BAD: imports all of lodash
import _ from 'lodash';
// GOOD: imports only what is needed
import { map } from 'lodash';
```
### Async Patterns
```javascript
// BAD: serial execution
const a = await fetchA();
const b = await fetchB();
const c = await fetchC();

// GOOD: parallel execution
const [a, b, c] = await Promise.all([
  fetchA(),
  fetchB(),
  fetchC()
]);
```
## Output
Generate a performance report with:
- Issue location and severity
- Estimated performance impact
- Recommended fix
## Integrating with GitHub
### PR Review Workflow
**.claude/commands/review-pr.md:**
```markdown
---
description: Review a GitHub PR and post comments
arguments:
- name: pr
description: PR number
required: true
- name: post
description: Whether to post review to GitHub
required: false
default: false
---
# GitHub PR Review
Review a PR and optionally post the review to GitHub.
## Fetch PR Information
1. Get PR details: `gh pr view [pr] --json title,body,author,files`
2. Get full diff: `gh pr diff [pr]`
3. Get existing comments: `gh pr view [pr] --comments`
## Analyze Changes
For each changed file:
1. Determine change type:
- New file
- Modified file
- Deleted file
- Renamed file
2. Review only changed lines:
- Focus on additions (+)
- Consider deletions (-) for context
3. Apply standard review criteria
## Compile Review
Structure the review:
### Summary
- Overall assessment: APPROVE / REQUEST_CHANGES / COMMENT
- Brief summary of the PR purpose
- Key concerns or praise
### File Comments
For each issue found:
- File path
- Line number (from diff)
- Comment text
- Suggestion if applicable
### General Comments
- Architecture observations
- Testing suggestions
- Documentation needs
## Post to GitHub (if --post)
Use GitHub CLI to submit review:
```bash
gh pr review [pr] --comment --body "Review body here"
# Or for specific line comments
gh api repos/{owner}/{repo}/pulls/{pr}/comments \
-f body="Comment text" \
-f path="file/path.ts" \
  -F line=42 \
  -f side="RIGHT"
```
## Output Format
PR Review: #[number] - [title]
Author: @[username]
Files Changed: [count]
## Summary
[Brief assessment]
## Verdict: [APPROVE/REQUEST_CHANGES/COMMENT]
## Issues Found
### Critical
- file.ts:42 - [Issue description]
### Suggestions
- file.ts:100 - [Suggestion]
## Positive Notes
- Good test coverage
- Clear naming conventions
[Posted to GitHub: Yes/No]
### Automated PR Review Hook
**.claude/hooks/pr-opened.md:**
```markdown
---
trigger:
event: pull_request
action: opened
---
# Automatic PR Review
When a PR is opened, automatically run initial review.
## Process
1. Wait 30 seconds (let CI start)
2. Run /review-pr [number]
3. If critical issues found, add comment
4. If no critical issues, add approval
## Behavior
Only run on PRs that:
- Are from repository members (not forks)
- Are not marked as draft
- Are not from bots
## Comment Format
I've performed an initial automated review of this PR.
[Brief summary of findings]
[List of issues or "No significant issues found"]
This is an automated review. A human reviewer will follow up.
```
### Review Statistics
**.claude/commands/review-stats.md:**
---
description: Generate code review statistics
arguments:
- name: period
description: Time period (week, month, quarter)
required: false
default: week
---
# Review Statistics
Generate statistics about code reviews.
## Gather Data
1. Get merged PRs in period: `gh pr list --state merged --limit 100`
2. For each PR, analyze:
- Time to first review
- Time to merge
- Number of review rounds
- Issues found by automated review
- Issues found by human review
## Calculate Metrics
### Review Efficiency
- Average time to first review
- Average time to merge
- Percentage of PRs with automated review issues
### Issue Categories
- Security issues found (automated vs human)
- Performance issues found
- Style issues found
### Review Quality
- PRs that required rework after merge
- Issues that slipped through review
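The timing metrics could be computed from `gh pr list --state merged --json createdAt,mergedAt` output with a small helper like this (the helper name is illustrative):

```javascript
// Average hours between two ISO-8601 timestamp fields across a list of PRs,
// e.g. prs = JSON.parse(output of `gh pr list --state merged --json createdAt,mergedAt`)
function averageHours(prs, fromField, toField) {
  const deltas = prs
    .filter(pr => pr[fromField] && pr[toField])
    .map(pr => (new Date(pr[toField]) - new Date(pr[fromField])) / 3_600_000);
  if (deltas.length === 0) return 0;
  return deltas.reduce((a, b) => a + b, 0) / deltas.length;
}
```

Time to first review would use the earliest `submittedAt` from the PR's reviews in place of `mergedAt`.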
## Generate Report
Example report:
```markdown
# Review Statistics ([period])

PRs Merged: 42
Average Time to Review: 4.2 hours
Average Time to Merge: 18.6 hours

Issues Found by Automated Review: 127
Issues Found by Human Review: 45

## Review Effectiveness
[...]

## Top Issue Categories
[...]

## Recommendations
[...]
```
Create a configuration file for your review standards:
**.claude/review-config.json:**
```json
{
  "security": {
    "enabled": true,
    "failOnCritical": true,
    "patterns": {
      "secrets": [
        "api[_-]?key",
        "secret[_-]?key",
        "password",
        "bearer"
      ],
      "sqlInjection": [
        "query\\(.*\\+.*\\)",
        "execute\\(.*\\+.*\\)"
      ]
    }
  },
  "performance": {
    "enabled": true,
    "maxFunctionLines": 50,
    "maxFileLines": 500,
    "flagPatterns": [
      "for.*for",
      "SELECT \\*"
    ]
  },
  "style": {
    "enabled": true,
    "requireTypes": true,
    "maxLineLength": 100,
    "namingConvention": "camelCase"
  },
  "documentation": {
    "enabled": true,
    "requireFunctionDocs": true,
    "requireClassDocs": true
  },
  "testing": {
    "enabled": true,
    "minCoverage": 80,
    "requireTestsForNewFiles": true
  }
}
```
Reference it in your review skill:
```markdown
## Load Configuration
Read `.claude/review-config.json` for review standards.
Apply configured rules and thresholds.
Skip disabled categories.
```
Begin with reviews that don't block PRs: post findings as informational comments so the team can evaluate them without friction. As confidence grows, promote the most reliable checks to required status checks. Monitor when automated reviews are wrong, and tune patterns and thresholds to reduce false positives. Automated review should assist human reviewers, not replace them.
Sometimes rules should be broken. Provide clear override mechanisms:
```javascript
// @review-ignore: security/hardcoded-secret
// This is an example for documentation, not a real secret
const EXAMPLE_API_KEY = 'example-key-12345';
```
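A sketch of how findings could be filtered against these directives. The directive syntax matches the example above; the filter function itself is hypothetical:

```javascript
// Drop findings whose preceding line carries a matching @review-ignore directive.
// `findings` use 1-based line numbers; `sourceLines` is the file split on '\n'.
function filterIgnored(findings, sourceLines) {
  return findings.filter(({ line, rule }) => {
    const prev = sourceLines[line - 2] || '';
    const match = prev.match(/@review-ignore:\s*(\S+)/);
    return !(match && match[1] === rule);
  });
}
```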
Automated code review catches mechanical issues so human reviewers can focus on what matters: architecture, business logic, and developer growth.
| Skill | Purpose |
|---|---|
| `/review` | Comprehensive code review |
| `/security-review` | Deep security analysis |
| `/perf-review` | Performance analysis |
| `/review-pr` | GitHub PR review |
| `/review-stats` | Review metrics |
Want to automate documentation too? Check out Documentation Generation Skills for doc automation.