Better Debug Logging With AI
Enhanced debugging output with AI analysis. Learn structured logging, contextual log enrichment, and AI-powered log analysis for faster bug resolution.
The default developer approach to debugging is console.log('here'). When that doesn't work, it becomes console.log('here 2'). When that fails, it escalates to console.log('value:', JSON.stringify(value, null, 2)). This process repeats until the bug is found or the developer loses patience.
AI transforms debugging from reactive log insertion to proactive log architecture. Instead of adding logs after a bug appears, AI helps you build logging systems that capture the right context before bugs happen. And when bugs do happen, AI analyzes log patterns to identify root causes in minutes instead of hours.
Key Takeaways
- Structured logging makes logs searchable and parseable, dramatically cutting debug time compared to unstructured console.log
- Contextual enrichment adds request IDs, user context, and timing data to every log line automatically
- AI log analysis identifies patterns across thousands of log lines that human scanning would miss
- Log levels used correctly eliminate the "too many logs" problem without losing diagnostic value
- Five logging patterns cover the vast majority of debugging scenarios in AI-powered applications
The Problem With console.log
Why Developers Default to It
console.log is immediate, requires no setup, and works everywhere. These are genuine advantages. The problem isn't console.log itself -- it's using it as your primary debugging strategy.
The cost becomes clear in production. When a user reports "the page loaded slowly," unstructured logs give you:
loading data...
data loaded
rendering...
done
This tells you nothing. When did each step happen? What data was loaded? How long did rendering take? What user triggered this flow?
What Structured Logging Gives You
The same flow with structured logging:
{"level":"info","msg":"loading data","requestId":"abc123","userId":"user456","timestamp":"2026-06-18T10:00:00.000Z"}
{"level":"info","msg":"data loaded","requestId":"abc123","duration_ms":342,"recordCount":47,"timestamp":"2026-06-18T10:00:00.342Z"}
{"level":"info","msg":"rendering","requestId":"abc123","componentCount":12,"timestamp":"2026-06-18T10:00:00.345Z"}
{"level":"info","msg":"done","requestId":"abc123","totalDuration_ms":523,"timestamp":"2026-06-18T10:00:00.523Z"}
Now you can answer: How long did data loading take? 342ms. How many records? 47. Which user? user456. What was the total request time? 523ms.
Setting Up Structured Logging
The Logger Module
Create a centralized logger that all modules use:
// lib/logger.ts
type LogLevel = 'debug' | 'info' | 'warn' | 'error'

interface LogContext {
  requestId?: string
  userId?: string
  [key: string]: any
}

// Numeric ranking so entries below the configured threshold can be filtered out
const LOG_LEVELS: Record<LogLevel, number> = {
  debug: 0,
  info: 1,
  warn: 2,
  error: 3,
}

const currentLevel = (process.env.LOG_LEVEL || 'info') as LogLevel

function createLogger(module: string) {
  return {
    debug: (msg: string, ctx?: LogContext) => log('debug', module, msg, ctx),
    info: (msg: string, ctx?: LogContext) => log('info', module, msg, ctx),
    warn: (msg: string, ctx?: LogContext) => log('warn', module, msg, ctx),
    error: (msg: string, ctx?: LogContext) => log('error', module, msg, ctx),
  }
}

function log(level: LogLevel, module: string, msg: string, ctx?: LogContext) {
  if (LOG_LEVELS[level] < LOG_LEVELS[currentLevel]) return

  const entry = {
    level,
    module,
    msg,
    timestamp: new Date().toISOString(),
    ...ctx,
  }

  // Errors go to stderr so they can be routed and alerted on separately
  if (level === 'error') {
    console.error(JSON.stringify(entry))
  } else {
    console.log(JSON.stringify(entry))
  }
}

export { createLogger }
Usage:
import { createLogger } from '@/lib/logger'

const log = createLogger('skills-api')

export async function getSkills(userId: string) {
  const requestId = crypto.randomUUID()
  log.info('fetching skills', { requestId, userId })

  const startTime = performance.now()
  const skills = await supabase.from('skills').select('*')
  const duration = performance.now() - startTime

  log.info('skills fetched', {
    requestId,
    userId,
    count: skills.data?.length,
    duration_ms: Math.round(duration),
  })

  return skills
}
Log Levels Done Right
Most developers use info for everything. Here's when to use each level:
| Level | When | Example |
|---|---|---|
| debug | Detailed diagnostic info for developers | Variable values, function entry/exit |
| info | Normal operations worth recording | Request completed, skill installed |
| warn | Something unexpected but handled | Retry attempt, fallback used |
| error | Something failed and needs attention | API call failed, database unreachable |
The critical insight: debug logs should be numerous and detailed. info logs should be sparse and meaningful. In production, set LOG_LEVEL=info to suppress debug noise. When debugging a specific issue, set LOG_LEVEL=debug for the affected module.
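Per-module debug levels are not part of the logger shown earlier, but they are a small extension. The sketch below is one way to do it, assuming hypothetical environment variables of the form LOG_LEVEL_&lt;MODULE&gt; that override the global LOG_LEVEL:

```typescript
// A minimal sketch of per-module level resolution. The LOG_LEVEL_<MODULE>
// naming convention is an assumption, not part of the logger above.
type LogLevel = 'debug' | 'info' | 'warn' | 'error'

const LOG_LEVELS: Record<LogLevel, number> = { debug: 0, info: 1, warn: 2, error: 3 }

function levelFor(module: string): LogLevel {
  // e.g. LOG_LEVEL_SKILLS_API=debug overrides LOG_LEVEL=info for 'skills-api'
  const perModule = process.env[`LOG_LEVEL_${module.toUpperCase().replace(/-/g, '_')}`]
  const fallback = process.env.LOG_LEVEL || 'info'
  const candidate = (perModule || fallback) as LogLevel
  return candidate in LOG_LEVELS ? candidate : 'info'
}

function shouldLog(level: LogLevel, module: string): boolean {
  return LOG_LEVELS[level] >= LOG_LEVELS[levelFor(module)]
}
```

Calling shouldLog inside the log function would let you debug one noisy module without flooding production logs from every other module.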
Five Logging Patterns for AI Applications
Pattern 1: Request Lifecycle Logging
Log the start and end of every significant operation:
async function processSkillInstall(skillId: string, userId: string) {
  const requestId = crypto.randomUUID()
  const log = createLogger('install')

  log.info('install started', { requestId, skillId, userId })

  try {
    const skill = await getSkill(skillId)
    log.debug('skill loaded', { requestId, skillTitle: skill.title })

    await incrementInstallCount(skillId)
    log.debug('count incremented', { requestId })

    await recordInstallEvent(skillId, userId)

    log.info('install completed', { requestId, skillId, userId })
    return { success: true }
  } catch (error) {
    log.error('install failed', {
      requestId,
      skillId,
      userId,
      error: error.message,
      stack: error.stack,
    })
    return { success: false, error: error.message }
  }
}
The requestId ties all log entries for a single operation together. When debugging, filter by requestId to see the complete lifecycle.
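The filtering step can be sketched in a few lines. This assumes each log line is a single JSON object, as the logger above produces:

```typescript
// Parse captured log lines and keep only the entries for one request.
interface LogEntry {
  level: string
  msg: string
  requestId?: string
  [key: string]: unknown
}

function entriesForRequest(rawLines: string[], requestId: string): LogEntry[] {
  return rawLines
    .map((line) => {
      try {
        return JSON.parse(line) as LogEntry
      } catch {
        return null // skip non-JSON lines, e.g. stray console.log output
      }
    })
    .filter((e): e is LogEntry => e !== null && e.requestId === requestId)
}
```

In practice a log aggregator (or even grep on the requestId string) does the same job; the point is that structured entries make the filter trivial.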
Pattern 2: Performance Timing
Wrap expensive operations with timing:
function withTiming<T>(
  logger: ReturnType<typeof createLogger>,
  operation: string,
  ctx: LogContext,
  fn: () => Promise<T>
): Promise<T> {
  const startTime = performance.now()
  return fn().then(
    (result) => {
      logger.info(`${operation} completed`, {
        ...ctx,
        duration_ms: Math.round(performance.now() - startTime),
      })
      return result
    },
    (error) => {
      logger.error(`${operation} failed`, {
        ...ctx,
        duration_ms: Math.round(performance.now() - startTime),
        error: error.message,
      })
      throw error
    }
  )
}

// Usage
const skills = await withTiming(log, 'database query', { query: 'skills' }, () =>
  supabase.from('skills').select('*').limit(20)
)
Pattern 3: AI API Call Logging
Log AI model interactions with context for debugging and cost tracking:
async function callModel(prompt: string, ctx: LogContext) {
  const log = createLogger('ai')

  log.info('model call started', {
    ...ctx,
    promptLength: prompt.length,
    model: 'claude-sonnet-4-20250514',
  })

  const startTime = performance.now()
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 4096,
    messages: [{ role: 'user', content: prompt }],
  })
  const duration = performance.now() - startTime

  log.info('model call completed', {
    ...ctx,
    duration_ms: Math.round(duration),
    inputTokens: response.usage.input_tokens,
    outputTokens: response.usage.output_tokens,
    stopReason: response.stop_reason,
  })

  return response
}
This pattern is essential for tracking AI costs and performance. When a skill suddenly becomes slow or expensive, these logs reveal whether the prompt grew, the model changed, or the response became verbose.
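The logged token counts also let you estimate a per-call cost. A hedged sketch, where the per-million-token prices are placeholders (check Anthropic's current pricing page for real numbers):

```typescript
// Hypothetical USD prices per million tokens -- placeholders, not real pricing.
const PRICE_PER_MTOK = { input: 3.0, output: 15.0 }

function estimateCostUsd(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * PRICE_PER_MTOK.input +
    (outputTokens / 1_000_000) * PRICE_PER_MTOK.output
  )
}
```

Logging the estimate alongside the token counts makes cost regressions show up in the same place as latency regressions.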
Pattern 4: Error Context Enrichment
When errors occur, capture the full context:
class EnrichedError extends Error {
  context: Record<string, any>

  constructor(message: string, context: Record<string, any>) {
    super(message)
    this.context = context
  }
}

// In your code
try {
  await installSkill(skillId)
} catch (error) {
  throw new EnrichedError('Skill installation failed', {
    skillId,
    userId,
    originalError: error.message,
    timestamp: new Date().toISOString(),
  })
}
When this error reaches your error handler, the context tells you exactly what was happening when the failure occurred.
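One way the receiving side might look. This sketch repeats the EnrichedError class so it is self-contained, and builds the structured entry your error handler would log:

```typescript
// Repeated from above so this sketch stands alone.
class EnrichedError extends Error {
  context: Record<string, any>
  constructor(message: string, context: Record<string, any>) {
    super(message)
    this.context = context
  }
}

// Turn any thrown value into a structured log entry, preserving enriched context.
function errorLogEntry(error: unknown): Record<string, unknown> {
  if (error instanceof EnrichedError) {
    // The context captured at the failure site travels with the error
    return { level: 'error', msg: error.message, ...error.context }
  }
  const err = error instanceof Error ? error : new Error(String(error))
  return { level: 'error', msg: err.message, stack: err.stack }
}
```

The handler then only has to JSON.stringify the returned entry; no state reconstruction is needed.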
Pattern 5: State Change Logging
For stateful operations, log the before and after:
async function updateSkillMetadata(skillId: string, updates: Partial<Skill>) {
  const log = createLogger('skills')

  const before = await getSkill(skillId)
  log.debug('updating skill', {
    skillId,
    changes: Object.keys(updates),
    before: {
      title: before.title,
      category: before.category,
    },
  })

  await supabase.from('skills').update(updates).eq('id', skillId)

  const after = await getSkill(skillId)
  log.info('skill updated', {
    skillId,
    changes: Object.keys(updates),
    after: {
      title: after.title,
      category: after.category,
    },
  })
}
AI-Powered Log Analysis
Once you have structured logs, AI can analyze patterns that humans would miss.
Pattern Detection
Feed a batch of error logs to Claude and ask for pattern analysis:
Analyze these error logs from the last 24 hours.
Identify:
1. The most common error types
2. Time-based patterns (do errors cluster at specific times?)
3. User-based patterns (do errors affect specific users?)
4. Correlation with recent deployments
5. Suggested root causes
AI excels at this because it can process thousands of structured log entries and identify patterns across multiple dimensions simultaneously.
Building a Log Analysis Skill
Create a reusable skill for log analysis:
## Log Analyzer Skill
When given a set of structured log entries:
1. Parse the JSON entries
2. Group by error type and count occurrences
3. Identify temporal patterns (clusters within 5-minute windows)
4. Correlate errors with preceding warning/info events
5. Suggest probable root causes based on the error-to-context mapping
6. Rank suggestions by confidence level
This skill turns raw logs into actionable diagnoses, reducing the time from a mysterious error report to root cause identification.
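Steps 1 and 2 of the skill above can be sketched in TypeScript as a pre-aggregation pass that runs before the logs ever reach the model. Grouping on the msg field is an assumption; real systems might group on an error code instead:

```typescript
// Parse structured log lines and count occurrences of each error type.
interface ParsedEntry {
  level: string
  msg: string
  [key: string]: unknown
}

function countErrorTypes(rawLines: string[]): Map<string, number> {
  const counts = new Map<string, number>()
  for (const line of rawLines) {
    let entry: ParsedEntry
    try {
      entry = JSON.parse(line)
    } catch {
      continue // skip malformed lines
    }
    if (entry.level !== 'error') continue
    counts.set(entry.msg, (counts.get(entry.msg) ?? 0) + 1)
  }
  return counts
}
```

Pre-aggregating like this also keeps the prompt small: you can send the counts plus a sample of raw entries rather than thousands of near-duplicate lines.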
FAQ
Won't structured logging slow down my application?
JSON serialization adds microseconds per log entry. This is negligible for web applications that spend milliseconds on network I/O and database queries. If you're logging in a hot loop, use debug level and disable it in production.
How do I handle sensitive data in logs?
Never log passwords, API keys, or personal data. Create a sanitizer function that strips sensitive fields before logging. Include field allowlists rather than blocklists to prevent accidental exposure.
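An allowlist sanitizer can be as small as the sketch below. The allowed field names are illustrative assumptions; extend the set to match your own log schema:

```typescript
// Only fields on this allowlist ever reach the log output.
const ALLOWED_FIELDS = new Set([
  'level', 'module', 'msg', 'timestamp',
  'requestId', 'userId', 'duration_ms', 'count',
])

function sanitize(ctx: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(ctx)) {
    if (ALLOWED_FIELDS.has(key)) {
      clean[key] = value
    }
    // Anything not on the allowlist is silently dropped, never logged
  }
  return clean
}
```

Calling sanitize on every context object inside the log function means a new sensitive field fails closed: it stays out of the logs until someone deliberately allowlists it.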
Should I use a logging library or build my own?
For production applications, use a library like Pino or Winston that handles log rotation, transports, and performance optimization. The custom logger in this article is for understanding the concepts -- not for production use.
How much should I log?
Log every state transition, every external call, and every error. Don't log loop iterations, variable assignments, or intermediate calculations. The goal is a narrative of what happened, not a line-by-line replay.
Can AI replace traditional debugging tools?
No. AI-powered log analysis complements debuggers, profilers, and other tools. Use debuggers for stepping through code, profilers for performance analysis, and AI-powered logging for pattern detection and root cause analysis.
Sources
- The Twelve-Factor App: Logs -- Logging best practices for cloud applications
- Pino Logger -- High-performance Node.js logging library
- OpenTelemetry -- Observability framework for distributed systems
- Anthropic Claude Code -- AI-assisted debugging capabilities
Explore production-ready AI skills at aiskill.market/browse or submit your own skill to the marketplace.