Skills vs RAG: When to Use Which (With Real Examples)
Skills and RAG solve different problems. This decision framework with real examples shows when to use each—and when to combine them.
The AI Agents Guidebook uses a memorable analogy: if the LLM is the brain, RAG is like having access to fresh information—newspapers, reports, real-time data. The agent, meanwhile, is the decision-maker who uses that brain and information to take action.
Skills and RAG aren't competing approaches—they solve different problems. But teams frequently confuse them, leading to suboptimal architectures. Understanding when to use each, and how to combine them effectively, is essential for building production AI systems.
This guide provides a clear decision framework with real examples.
Defining the Terms
Let's be precise about what we're comparing.
RAG (Retrieval-Augmented Generation)
RAG enhances LLM responses by retrieving relevant information at query time. The process:
- User asks a question
- System searches a knowledge base for relevant content
- Retrieved content is added to the LLM's context
- LLM generates response using both the question and retrieved content
RAG answers the question: "How do I give the model access to information it wasn't trained on?"
Example: A legal research system that retrieves relevant case law before answering questions about precedent.
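The four-step loop above can be sketched in a few lines. Everything here is illustrative: the knowledge base, the word-overlap scoring, and the `call_llm` stub stand in for a real vector store and model call, not any particular library's API.

```python
# Minimal sketch of the RAG loop: retrieve relevant content, add it to
# the context, then generate. All names and data are illustrative.

KNOWLEDGE_BASE = [
    "Our software supports SSO via SAML 2.0 and OIDC.",
    "Refunds are processed within 5 business days.",
    "The API rate limit is 100 requests per minute.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Step 2: rank documents by naive word overlap with the question.
    A real system would use embeddings, but the shape is the same."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; echoes the prompt for inspection."""
    return prompt

def answer(question: str) -> str:
    # Steps 3-4: add retrieved content to the context, then generate.
    context = "\n".join(retrieve(question))
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")
```

The point of the sketch is the control flow: retrieval happens at query time, and the model only ever sees the question plus whatever the retriever selected.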
Skills
Skills are packaged capabilities that extend what an agent can do. They include:
- System prompts that shape behavior
- Tools that enable actions
- Logic for handling specific workflows
- Guardrails for safety and quality
Skills answer the question: "How do I give the agent the ability to do something specific?"
Example: A contract analysis skill that knows how to read contracts, identify key terms, flag risks, and produce structured assessments.
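One way to picture a skill as a package is a bundle of prompt, tools, workflow, and guardrails. The field names and the contract-analysis instance below are illustrative, not a specific framework's API.

```python
# Sketch of a skill as a packaged capability: the four components listed
# above, bundled into one object. Names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Skill:
    name: str
    system_prompt: str                                        # shapes behavior
    tools: dict[str, Callable] = field(default_factory=dict)  # enables actions
    workflow: list[str] = field(default_factory=list)         # ordered steps
    guardrails: list[str] = field(default_factory=list)       # safety/quality

contract_analysis = Skill(
    name="contract_analysis",
    system_prompt="You are a contract analyst. Flag risks and cite clauses.",
    tools={"extract_terms": lambda text: []},  # stand-in for a real tool
    workflow=["read contract", "identify key terms", "flag risks",
              "produce structured assessment"],
    guardrails=["never give legal advice", "cite the clause for every risk"],
)
```

The useful property is that the whole capability travels as one unit: swap in a different skill and the agent's behavior, tools, and safety rules change together.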
The Key Distinction
RAG provides information the model needs but doesn't have. Skills provide capabilities the model needs but doesn't have.
Information vs. capability. Knowledge vs. action. That's the fundamental distinction.
When to Use RAG
RAG is the right choice when the core problem is knowledge access.
The Model Lacks Domain Knowledge
Foundation models are trained on general internet text. They lack:
- Proprietary company information
- Recent developments (post-training cutoff)
- Specialized domain corpora
- Internal documentation
Signal: The model gives confident but wrong answers because it doesn't have access to relevant information.
Example: Answering questions about your product
A customer asks, "Does your software support SSO integration?"
- Without RAG: Model might guess based on common patterns
- With RAG: System retrieves your actual documentation and answers accurately
Information Changes Frequently
Some information has a short shelf life:
- News and current events
- Market data
- Inventory levels
- Policy updates
Retraining or fine-tuning for dynamic information is impractical. RAG retrieves current information at query time.
Signal: Answers need to reflect information that changes weekly or more frequently.
Example: Investment research
An analyst asks about a company's recent earnings.
- Without RAG: Model only knows information from training cutoff
- With RAG: System retrieves latest earnings reports and analyst commentary
You Need Citations and Sources
Compliance, trust, and accountability often require showing where information came from.
Signal: Users need to verify information against sources, or regulations require auditability.
Example: Medical information system
A doctor queries about drug interactions.
- Without citations: Doctor can't verify accuracy, which creates liability concerns
- With RAG + citations: Response includes links to specific studies and guidelines
The Knowledge Base Is Large
When relevant information spans thousands or millions of documents, you can't include everything in context. Retrieval selects what's relevant.
Signal: You have more domain information than fits in any context window.
Example: Technical support knowledge base
A support agent has access to 50,000 articles covering every product and issue.
- Without RAG: Agent can only use what fits in context
- With RAG: System retrieves the 5-10 most relevant articles for this specific issue
When to Use Skills
Skills are the right choice when the core problem is capability.
The Task Requires Specific Actions
Some tasks require doing things, not just knowing things:
- Executing code
- Calling APIs
- Modifying files
- Sending communications
Signal: The desired outcome is an action, not just information.
Example: Automated code review
A developer submits a pull request for review.
- RAG alone: Could retrieve coding standards but can't analyze the actual code
- Skill: Can read files, run linters, execute tests, and provide specific feedback
The Task Requires Domain Expertise
Domain expertise isn't just knowledge—it's knowing how to approach problems:
- What questions to ask
- What steps to follow
- What edge cases matter
- What outputs are useful
Signal: Experts follow specific methodologies that could be encoded as instructions.
Example: Contract risk assessment
A lawyer needs to assess a commercial contract.
- RAG alone: Could retrieve relevant contract law
- Skill: Follows systematic approach—check indemnification clauses, liability limits, termination provisions—and produces structured risk assessment
The Task Requires Multi-Step Reasoning
Complex tasks require breaking down problems and working through steps:
- Analyzing, then synthesizing
- Checking multiple conditions
- Iterating based on intermediate results
Signal: Experts would describe their process as a series of steps with decision points.
Example: Financial analysis
An analyst needs to evaluate an acquisition target.
- Simple query: "Is this company a good acquisition?"
- Skill approach: Analyze financials, assess market position, evaluate integration risks, synthesize recommendation
The Task Requires Tool Orchestration
Many tasks require using multiple tools in coordination:
- Query database, then analyze results
- Fetch web data, then process it
- Generate content, then validate it
Signal: The workflow involves multiple systems or data sources.
Example: Sales intelligence
A sales rep needs background on a prospect before a call.
- Single tool: Could query CRM or LinkedIn
- Skill: Queries CRM, checks recent news, finds LinkedIn connections, and synthesizes a briefing document
The Hybrid Approach: Skills + RAG
The most powerful AI systems combine skills and RAG. The skill provides capability; RAG provides knowledge.
Pattern 1: Skill With Retrieval Tool
The skill has access to a retrieval tool as one of its capabilities.
Implementation:
Skill: Legal Document Analyzer
- System prompt: Domain expertise for legal analysis
- Tools:
- retrieve_case_law(query) -> relevant precedents
- retrieve_regulations(jurisdiction) -> applicable rules
- analyze_document(document) -> structured analysis
- Workflow: Retrieve relevant context, then apply analysis
Example: Due diligence skill
The skill:
- Receives documents to analyze
- Retrieves relevant industry standards and precedents
- Applies systematic analysis methodology
- Produces structured assessment
RAG provides the knowledge. The skill provides the methodology.
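Pattern 1 can be sketched as a workflow function that calls retrieval as one step. The clause list, the precedent store, and the output format are all illustrative assumptions, not the legal-analyzer's real interface.

```python
# Sketch of Pattern 1: the skill owns the methodology; retrieval is just
# one tool it calls along the way. Data and names are illustrative.

def retrieve_case_law(clause: str) -> list[str]:
    """Stand-in for a real retrieval backend keyed by clause type."""
    precedents = {
        "indemnification": ["Precedent A: broad indemnity clauses upheld."],
        "liability": ["Precedent B: liability caps enforced as written."],
    }
    return precedents.get(clause, [])

def due_diligence(document: str) -> dict:
    """The skill's methodology: check each clause type systematically,
    retrieving relevant precedents (RAG) before flagging it as a risk."""
    risks = []
    for clause in ("indemnification", "liability", "termination"):
        if clause in document.lower():
            risks.append({"clause": clause,
                          "precedents": retrieve_case_law(clause)})
    return {"risk_count": len(risks), "risks": risks}
```

Note the division of labor: the checklist of clauses lives in the skill, while the precedents come from retrieval at run time.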
Pattern 2: Skill That Manages RAG Pipeline
Some skills orchestrate the retrieval process itself, making smart decisions about what to retrieve.
Implementation:
Skill: Research Assistant
- System prompt: Research methodology expert
- Tools:
- search_papers(query) -> academic sources
- search_news(query) -> recent coverage
- search_internal(query) -> company documents
- synthesize(sources) -> integrated summary
- Workflow: Determine what sources are needed, retrieve intelligently, synthesize
Example: Competitive intelligence skill
The skill:
- Receives request: "Analyze competitor's new product launch"
- Decides what to retrieve: news coverage, analyst reports, social sentiment
- Executes targeted searches
- Synthesizes findings into actionable intelligence
The skill's expertise is knowing what to look for and how to interpret it.
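Pattern 2 can be sketched as a routing function: the skill inspects the request and decides which sources to query. The source functions and the keyword-based routing rules below are illustrative assumptions; a real system would let the model make these decisions.

```python
# Sketch of Pattern 2: the skill orchestrates retrieval, choosing
# sources per request. All sources are stand-ins for real backends.

def search_internal(query: str) -> list[str]:
    return [f"internal doc for: {query}"]

def search_news(query: str) -> list[str]:
    return [f"news coverage for: {query}"]

def search_papers(query: str) -> list[str]:
    return [f"academic source for: {query}"]

def research(request: str) -> dict:
    """Decide what to retrieve based on the request, then synthesize."""
    sources = [search_internal]                    # always check internal docs
    if "competitor" in request or "launch" in request:
        sources.append(search_news)                # market events need news
    if "study" in request or "evidence" in request:
        sources.append(search_papers)              # claims need literature
    findings = [hit for src in sources for hit in src(request)]
    return {"sources_used": len(sources), "findings": findings}
```

The routing logic is the skill's expertise made explicit: it encodes which kinds of questions call for which kinds of sources.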
Pattern 3: RAG-Augmented Decision Making
The skill makes decisions, with RAG providing relevant context at each step.
Implementation:
Skill: Medical Triage
- System prompt: Emergency triage protocols
- Tools:
- retrieve_symptoms(symptom_list) -> relevant conditions
- retrieve_protocols(condition) -> treatment guidelines
- document_decision(reasoning) -> audit trail
- Workflow: Assess symptoms, retrieve relevant medical info, apply protocols, document
Example: Insurance claims skill
The skill:
- Receives claim information
- Retrieves relevant policy terms for this claim type
- Retrieves precedent decisions for similar claims
- Applies decision framework
- Documents reasoning for audit trail
RAG provides policy and precedent knowledge. The skill applies the decision methodology.
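Pattern 3 can be sketched with the insurance-claims example: retrieval feeds each decision step, and every step is logged for audit. The policy terms, the escalation threshold, and the claim format are illustrative assumptions.

```python
# Sketch of Pattern 3: RAG-augmented decision making with an audit
# trail. Policy data and thresholds are illustrative.

POLICY_TERMS = {"water_damage": {"covered": True, "cap": 10_000}}

def assess_claim(claim_type: str, amount: int) -> dict:
    audit = []
    terms = POLICY_TERMS.get(claim_type)            # retrieval step
    audit.append(f"retrieved terms for {claim_type}: {terms}")
    if terms is None or not terms["covered"]:
        decision = "deny"                           # not a covered claim type
    elif amount > terms["cap"]:
        decision = "escalate"                       # over cap: human review
    else:
        decision = "approve"
    audit.append(f"decision: {decision} for amount {amount}")
    return {"decision": decision, "audit_trail": audit}
```

The decision framework (deny/escalate/approve) is the skill; the policy terms it consults at each step are the RAG component, and the audit trail records how the two combined.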
Decision Framework
Use this framework to decide between RAG, skills, or hybrid approaches:
Question 1: What's Missing?
If the model gives wrong answers because it lacks information: → RAG is needed
If the model doesn't know how to approach the task: → Skill is needed
If both: → Hybrid approach
Question 2: What Does Success Look Like?
If success is a correct answer with citations: → RAG-focused
If success is a completed action or workflow: → Skill-focused
If success is both: → Hybrid approach
Question 3: How Complex Is the Task?
Single question, single answer: → RAG might be sufficient
Multi-step workflow with decisions: → Skill is needed
Multi-step workflow requiring external knowledge: → Hybrid approach
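The three questions above collapse into a small routing function. The boolean inputs are illustrative; in practice each flag comes from assessing the use case against the signals described earlier.

```python
# The decision framework as code: what's missing determines the
# architecture. Inputs are judgments made per use case.

def choose_architecture(lacks_information: bool, lacks_capability: bool) -> str:
    if lacks_information and lacks_capability:
        return "hybrid"        # needs both knowledge and methodology
    if lacks_information:
        return "rag"           # right process, missing facts
    if lacks_capability:
        return "skill"         # has facts, missing methodology
    return "plain prompting"   # neither is missing; keep it simple
```

The last branch is worth keeping: if the model already has both the information and the capability, neither RAG nor a skill is justified.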
Quick Reference Table
| Situation | Approach |
|---|---|
| Answer questions about internal docs | RAG |
| Answer questions with citations | RAG |
| Provide current/recent information | RAG |
| Execute multi-step workflows | Skill |
| Apply domain methodology | Skill |
| Use multiple tools in sequence | Skill |
| Complex analysis with domain knowledge | Hybrid |
| Decision-making with precedent lookup | Hybrid |
| Research and synthesis tasks | Hybrid |
Real Examples
Example 1: Customer Support System
Scenario: Automated system to handle customer inquiries
Analysis:
- Needs product knowledge (company docs, policies) → RAG
- Needs ability to look up accounts, process returns → Skill
- Must follow support protocols → Skill
- Needs access to knowledge base articles → RAG
Architecture: Hybrid
The skill:
- Defines support agent behavior and protocols
- Has tools for account lookup, ticket management
- Includes retrieval tool for knowledge base
RAG components:
- Knowledge base with product documentation
- FAQ database
- Policy documents
Example 2: Code Documentation Generator
Scenario: Generate documentation from source code
Analysis:
- Needs to read code files → Tool (in skill)
- Needs coding standards knowledge → Could be in-context or RAG
- Needs methodology for documentation → Skill
- Output is action (writing docs), not just information
Architecture: Skill-focused (light RAG)
The skill:
- Defines documentation methodology
- Has tools for reading code, writing files
- May retrieve relevant style guides
Example 3: Market Research Platform
Scenario: Provide analysis of market trends and competitors
Analysis:
- Needs current market data → RAG
- Needs company information → RAG
- Needs analysis methodology → Skill
- Needs synthesis capabilities → Skill
Architecture: Hybrid (heavy on both)
The skill:
- Defines research and analysis methodology
- Orchestrates multiple retrieval sources
- Synthesizes findings
RAG components:
- News and publications database
- Financial data sources
- Industry reports
- Company filings
Example 4: Compliance Checker
Scenario: Check if documents comply with regulations
Analysis:
- Needs regulation text → RAG
- Needs precedent interpretations → RAG
- Needs systematic checking methodology → Skill
- Needs ability to highlight issues → Skill
Architecture: Hybrid
The skill:
- Defines compliance checking methodology
- Retrieves relevant regulations for document type
- Applies systematic review process
- Produces structured compliance report
Common Mistakes
Mistake 1: Using RAG When You Need a Skill
Symptom: System retrieves relevant information but doesn't know what to do with it.
Example: A contract review system retrieves relevant clauses but doesn't analyze risk or produce useful output.
Fix: Build a skill that defines the analysis methodology, with RAG providing supporting information.
Mistake 2: Using a Skill When You Need RAG
Symptom: Skill follows the right process but lacks needed information.
Example: A research skill produces structured analysis but makes claims not supported by actual sources.
Fix: Add retrieval capabilities to the skill so it has access to relevant information.
Mistake 3: Over-Engineering RAG
Symptom: Complex retrieval pipeline when simple prompting would work.
Example: Building a vector database for information that could just be included in the prompt.
Fix: Start with in-context information. Add RAG when context limits are exceeded.
Mistake 4: Under-Engineering RAG
Symptom: Poor retrieval quality leading to irrelevant context.
Example: Simple similarity search that returns topically related but not actually useful documents.
Fix: Invest in retrieval quality—better chunking, re-ranking, query expansion, evaluation.
Mistake 5: Conflating Skills and RAG
Symptom: Trying to encode methodology into retrieval or knowledge into skill prompts.
Example: Creating elaborate retrieval pipelines to "teach" the model how to analyze, rather than encoding analysis methodology in a skill.
Fix: Clear separation—skills provide capability, RAG provides knowledge.
Building Hybrid Systems
When building systems that combine skills and RAG:
Design Principles
Separate concerns: Keep skill logic and retrieval components distinct. The skill shouldn't have RAG logic embedded; it should use retrieval as a tool.
Evaluate separately: Measure retrieval quality independent of skill performance. Are you retrieving the right documents? Is the skill using them effectively?
Fail gracefully: When retrieval fails or returns nothing, the skill should handle it appropriately—acknowledge the limitation rather than making things up.
Monitor both: Track retrieval latency and quality alongside skill performance.
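The "fail gracefully" principle can be sketched directly: when retrieval returns nothing, the skill acknowledges the gap instead of generating an unsupported answer. The `retrieve` stub and the fallback wording are illustrative.

```python
# Sketch of graceful failure: empty retrieval results produce an honest
# "I don't know" rather than a fabricated answer. Data is illustrative.

def retrieve(topic: str) -> list[str]:
    """Stand-in for a real retrieval backend."""
    docs = {"sso": ["SSO is supported via SAML 2.0 and OIDC."]}
    return docs.get(topic.lower(), [])

def answer_with_fallback(topic: str) -> str:
    docs = retrieve(topic)
    if not docs:
        # Retrieval failed: acknowledge the limitation explicitly.
        return "I couldn't find documentation on that topic."
    return f"Based on our docs: {docs[0]}"
```

The check is trivial, but omitting it is exactly how retrieval-backed systems end up hallucinating with citations attached.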
Implementation Patterns
Retrieval as tool: The skill has a `retrieve(query)` tool that returns relevant documents; the skill decides when and what to retrieve.
Pre-retrieval augmentation: Before invoking the skill, the system retrieves relevant context and includes it in the skill's input.
Iterative retrieval: The skill retrieves, processes, and may retrieve again based on what it learns from initial processing.
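The iterative pattern can be sketched as a retrieve-inspect-follow-up loop. The corpus and the "see:" cross-reference convention are illustrative assumptions standing in for whatever signals a real skill would extract from a first pass.

```python
# Sketch of iterative retrieval: fetch a document, look for follow-up
# references in it, and retrieve those too. Corpus is illustrative.

CORPUS = {
    "setup": "Install the agent. For auth details see: sso",
    "sso": "Configure SSO with your identity provider.",
}

def retrieve_iteratively(query: str, max_rounds: int = 3) -> list[str]:
    seen: set[str] = set()
    queue = [query]
    results = []
    for _ in range(max_rounds):                 # bound the loop: no runaways
        if not queue:
            break
        key = queue.pop(0)
        if key in seen or key not in CORPUS:
            continue
        seen.add(key)
        doc = CORPUS[key]
        results.append(doc)
        if "see: " in doc:                      # follow-up discovered mid-pass
            queue.append(doc.split("see: ")[1].split()[0])
    return results
```

Two details matter in practice: the round limit (so a chain of references can't loop forever) and the `seen` set (so the same document isn't retrieved twice).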
Conclusion
Skills and RAG solve different problems. RAG provides knowledge the model lacks. Skills provide capabilities the model lacks.
The decision framework is straightforward:
- Information problem? → RAG
- Capability problem? → Skill
- Both? → Hybrid
Most production systems are hybrid—skills that define methodology and capability, with RAG providing relevant knowledge at decision points.
The key is understanding what's actually missing. If the model knows what to do but lacks information, add RAG. If the model has information but doesn't know how to use it, add a skill. If both are missing, build a skill with retrieval capabilities.
Get this architecture decision right, and the rest of the system design follows naturally.
Next in this series: Enterprise AI Stack: How Fortune 500s Are Adopting Skills