Build a Deep Researcher: Multi-Agent Tutorial
Create a multi-agent deep research system using Linkup, CrewAI, and Ollama. Build agents that collaborate to research complex topics comprehensively.
Research tasks that seem simple often hide complexity. "Research the impact of AI on healthcare" requires finding relevant sources, evaluating credibility, extracting key findings, identifying patterns across sources, resolving contradictions, and synthesizing insights. No single agent handles all of this well.
Multi-agent research systems divide these responsibilities among specialized agents. A search agent finds sources. An analyst agent extracts insights. A fact-checker agent validates claims. A synthesizer agent combines findings. The result is more thorough research than any single agent could achieve.
This tutorial builds a deep researcher using CrewAI for agent orchestration, Linkup for intelligent web search, and Ollama for local LLM inference. You will create a system that researches complex topics with depth and rigor.
System Architecture
Research Pipeline
Our deep researcher follows a multi-phase approach:
Topic → Query Generation → Source Discovery → Content Extraction →
Analysis → Fact Checking → Synthesis → Report Generation
Each phase has dedicated agents:
1. Research Planner: breaks the topic into research questions and search strategies.
2. Source Scout: finds relevant sources using varied search strategies.
3. Content Analyst: extracts key information and insights from sources.
4. Fact Checker: validates claims and identifies contradictions.
5. Synthesizer: combines findings into coherent insights.
6. Report Writer: produces the final research report.
Why These Components?
Linkup: Provides semantic search that understands research intent, not just keywords. Returns structured results with credibility signals.
CrewAI: Orchestrates multiple agents with defined roles, goals, and task dependencies. Handles agent communication and workflow management.
Ollama: Runs LLMs locally for cost-effective inference. Ideal for research tasks that require many LLM calls.
Project Setup
Project Structure
deep-researcher/
├── agents/
│   ├── __init__.py
│   ├── planner.py
│   ├── scout.py
│   ├── analyst.py
│   ├── fact_checker.py
│   ├── synthesizer.py
│   └── writer.py
├── tools/
│   ├── __init__.py
│   ├── linkup_search.py
│   └── content_extractor.py
├── workflows/
│   ├── __init__.py
│   └── research_crew.py
├── output/
│   └── (generated reports)
├── config/
│   └── settings.py
├── main.py
├── requirements.txt
└── README.md
Dependencies
# requirements.txt
crewai>=0.28.0
crewai-tools>=0.2.0
linkup-sdk>=0.2.0
ollama>=0.1.0
langchain>=0.1.0
langchain-community>=0.0.20
beautifulsoup4>=4.12.0
pydantic>=2.0.0
pydantic-settings>=2.0.0
python-dotenv>=1.0.0
aiohttp>=3.9.0
Configuration
# config/settings.py
from pydantic_settings import BaseSettings
from typing import Optional

class Settings(BaseSettings):
    # Linkup
    linkup_api_key: str

    # Ollama
    ollama_base_url: str = "http://localhost:11434"
    ollama_model: str = "llama3:8b"

    # Optional OpenAI for comparison/fallback
    openai_api_key: Optional[str] = None

    # Research settings
    max_sources_per_query: int = 10
    max_content_length: int = 5000
    research_depth: str = "comprehensive"  # quick, standard, comprehensive

    # Output settings
    output_format: str = "markdown"  # markdown, html, pdf

    class Config:
        env_file = ".env"

settings = Settings()
Ollama Setup
Ensure Ollama is running with your preferred model:
# Install Ollama (macOS)
brew install ollama
# Start Ollama service
ollama serve
# Pull the model
ollama pull llama3:8b
# For stronger reasoning on complex topics (at much higher
# hardware cost), consider the larger model:
ollama pull llama3:70b
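Before kicking off a long research run, it helps to confirm that the Ollama server is reachable and the model has been pulled. A minimal sketch against Ollama's `/api/tags` endpoint (the same data `ollama list` shows); the helper names `installed_models` and `has_model` are introduced here for illustration, not part of the tutorial's codebase:

```python
import json
from urllib.request import urlopen

def installed_models(base_url: str = "http://localhost:11434") -> list:
    """Return the names of models installed on the local Ollama server."""
    with urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def has_model(models: list, wanted: str) -> bool:
    """Check whether the wanted model tag (e.g. 'llama3:8b') is installed."""
    return wanted in models

# Usage (requires a running Ollama server):
#   models = installed_models()
#   if not has_model(models, "llama3:8b"):
#       raise SystemExit("Run `ollama pull llama3:8b` first")
```

Failing fast here is cheaper than discovering mid-run that every agent call errors out.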
Building the Tools
Linkup Search Tool
# tools/linkup_search.py
from linkup import LinkupClient
from crewai_tools import tool
from typing import List, Dict

class LinkupSearchTool:
    def __init__(self, api_key: str):
        self.client = LinkupClient(api_key=api_key)

    @tool("Deep Web Search")
    def search(
        self,
        query: str,
        depth: str = "standard",
        max_results: int = 10
    ) -> str:
        """
        Search the web deeply for information on a topic.

        Args:
            query: The search query
            depth: Search depth - 'quick', 'standard', or 'deep'
            max_results: Maximum number of results to return
        """
        results = self.client.search(
            query=query,
            depth=depth,
            output_type="searchResults"
        )
        formatted = []
        for i, result in enumerate(results.results[:max_results], 1):
            formatted.append(f"""
Result {i}:
Title: {result.title}
URL: {result.url}
Snippet: {result.content[:500]}
Relevance: {result.score:.2f}
---""")
        return "\n".join(formatted)

    @tool("Source Content Fetch")
    def fetch_content(self, url: str) -> str:
        """
        Fetch and extract the main content from a URL.

        Args:
            url: The URL to fetch content from
        """
        result = self.client.search(
            query=f"site:{url}",
            depth="deep",
            output_type="sourcedAnswer"
        )
        if result.sources:
            source = result.sources[0]
            return f"""
Title: {source.title}
URL: {source.url}
Content:
{source.content[:5000]}
"""
        return "Could not fetch content from this URL."

    @tool("Academic Search")
    def academic_search(self, query: str, max_results: int = 5) -> str:
        """
        Search for academic and research sources.

        Args:
            query: Research query
            max_results: Maximum results
        """
        # Modify query to target academic sources
        academic_query = (
            f"{query} site:arxiv.org OR site:scholar.google.com "
            f"OR site:ncbi.nlm.nih.gov OR site:ieee.org OR filetype:pdf"
        )
        return self.search(academic_query, depth="deep", max_results=max_results)
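The formatting loop inside `search` defines the contract the downstream agents rely on: each hit becomes a labeled plain-text block. Isolated from the SDK, that contract is easy to inspect; `Hit` below is a hypothetical stand-in for Linkup's result objects, not the SDK's actual type:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    """Stand-in for a Linkup search result (title, url, content, score)."""
    title: str
    url: str
    content: str
    score: float

def format_hits(hits, max_results: int = 10) -> str:
    """Flatten search hits into the labeled text blocks the agents consume."""
    formatted = []
    for i, hit in enumerate(hits[:max_results], 1):
        formatted.append(
            f"Result {i}:\n"
            f"Title: {hit.title}\n"
            f"URL: {hit.url}\n"
            f"Snippet: {hit.content[:500]}\n"  # cap snippet length for the LLM context
            f"Relevance: {hit.score:.2f}\n---"
        )
    return "\n".join(formatted)

# Example:
#   format_hits([Hit("AI in Radiology", "https://example.org", "Deep learning...", 0.91)])
```

Keeping the snippet capped at 500 characters matters: the scout may pass dozens of results to an 8B local model, so per-result size directly trades off against how many sources fit in context.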
Content Extractor
# tools/content_extractor.py
from crewai_tools import tool
from typing import Dict, List

class ContentExtractor:
    @tool("Extract Key Points")
    def extract_key_points(self, content: str) -> str:
        """
        Extract key points and claims from content.

        Args:
            content: The content to analyze
        """
        # This would typically use an LLM, but we structure it as a tool;
        # the agent will process the output
        return f"""
Analyze this content and extract:
1. Main claims or arguments
2. Supporting evidence
3. Key statistics or data points
4. Expert opinions cited
5. Conclusions drawn

Content to analyze:
{content[:3000]}
"""

    @tool("Compare Sources")
    def compare_sources(self, sources: List[Dict]) -> str:
        """
        Compare multiple sources for consistency and contradictions.

        Args:
            sources: List of source contents to compare
        """
        comparison_prompt = "Compare these sources:\n\n"
        for i, source in enumerate(sources, 1):
            comparison_prompt += f"""
Source {i}: {source.get('title', 'Unknown')}
Key claims: {source.get('claims', 'Not extracted')}
---
"""
        comparison_prompt += """
Identify:
1. Points of agreement
2. Contradictions
3. Unique insights from each source
4. Credibility indicators
"""
        return comparison_prompt

class CitationManager:
    def __init__(self):
        self.citations = []

    @tool("Add Citation")
    def add_citation(
        self,
        title: str,
        url: str,
        author: str = "",
        date: str = "",
        accessed: str = ""
    ) -> str:
        """
        Add a citation to the reference list.

        Args:
            title: Title of the source
            url: URL of the source
            author: Author if known
            date: Publication date if known
            accessed: Date accessed
        """
        citation = {
            "id": len(self.citations) + 1,
            "title": title,
            "url": url,
            "author": author,
            "date": date,
            "accessed": accessed
        }
        self.citations.append(citation)
        return f"Citation added: [{citation['id']}] {title}"

    @tool("Get All Citations")
    def get_citations(self) -> str:
        """Get all citations in a formatted reference list."""
        if not self.citations:
            return "No citations recorded."
        formatted = ["## References\n"]
        for c in self.citations:
            ref = f"[{c['id']}] {c['title']}"
            if c['author']:
                ref += f" by {c['author']}"
            if c['date']:
                ref += f" ({c['date']})"
            ref += f"\n    URL: {c['url']}"
            if c['accessed']:
                ref += f"\n    Accessed: {c['accessed']}"
            formatted.append(ref)
        return "\n\n".join(formatted)
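Stripped of the `@tool` decorators, the citation logic is ordinary Python, which makes it easy to sanity-check in isolation: IDs are assigned in insertion order, and the reference list renders one numbered entry per source. A decorator-free sketch of the same behavior (`SimpleCitations` is a name introduced here, not part of the project):

```python
class SimpleCitations:
    """Minimal version of CitationManager without the CrewAI tool wrappers."""

    def __init__(self):
        self.citations = []

    def add(self, title: str, url: str, author: str = "", date: str = "") -> int:
        """Record a citation and return its 1-based reference number."""
        self.citations.append(
            {"id": len(self.citations) + 1, "title": title, "url": url,
             "author": author, "date": date}
        )
        return self.citations[-1]["id"]

    def render(self) -> str:
        """Render references in the same [n] Title style the report uses."""
        if not self.citations:
            return "No citations recorded."
        lines = ["## References"]
        for c in self.citations:
            ref = f"[{c['id']}] {c['title']}"
            if c["author"]:
                ref += f" by {c['author']}"
            if c["date"]:
                ref += f" ({c['date']})"
            lines.append(f"{ref}\n    URL: {c['url']}")
        return "\n\n".join(lines)
```

Because the IDs are just list positions, the writer agent can cite `[1]`, `[2]`, … inline and trust that the rendered reference list matches.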
Building the Agents
Research Planner Agent
# agents/planner.py
from crewai import Agent
from langchain_community.llms import Ollama

def create_research_planner(llm: Ollama) -> Agent:
    return Agent(
        role='Research Planner',
        goal='Create comprehensive research plans that ensure thorough coverage of complex topics',
        backstory='''You are an expert research strategist with decades of experience
        in academic and professional research. You excel at:
        - Breaking down complex topics into researchable questions
        - Identifying the key aspects that need investigation
        - Anticipating information gaps and planning for them
        - Designing search strategies that find diverse, high-quality sources
        - Prioritizing research efforts for maximum insight''',
        llm=llm,
        verbose=True,
        allow_delegation=True
    )

def create_planning_task(agent: Agent, topic: str, depth: str = "comprehensive") -> str:
    depth_instructions = {
        "quick": "Focus on 3-4 key questions, prioritize recent and authoritative sources.",
        "standard": "Cover 5-7 key questions, balance breadth and depth.",
        "comprehensive": "Investigate 8-10 questions, include academic sources, historical context, and current developments."
    }
    return f"""Create a research plan for the topic: "{topic}"

Research Depth: {depth}
{depth_instructions.get(depth, depth_instructions["standard"])}

Your plan should include:

1. **Topic Analysis**
   - Core concepts to understand
   - Key terms and definitions
   - Scope boundaries (what to include/exclude)

2. **Research Questions**
   - Primary questions (must answer)
   - Secondary questions (should answer if possible)
   - Exploratory questions (interesting to investigate)

3. **Search Strategy**
   - Specific search queries to use
   - Types of sources to prioritize
   - Academic vs. industry vs. news sources balance

4. **Information Needs**
   - Facts and data points needed
   - Expert perspectives to seek
   - Case studies or examples to find

5. **Quality Criteria**
   - How to evaluate source credibility
   - Recency requirements
   - Diversity of perspectives needed

Output as a structured research plan that other agents can follow."""
Source Scout Agent
# agents/scout.py
from crewai import Agent
from langchain_community.llms import Ollama
from tools.linkup_search import LinkupSearchTool

def create_source_scout(llm: Ollama, search_tool: LinkupSearchTool) -> Agent:
    return Agent(
        role='Source Scout',
        goal='Find high-quality, relevant sources that provide comprehensive coverage of research topics',
        backstory='''You are an expert research librarian and information specialist.
        You know how to:
        - Craft effective search queries that find the best sources
        - Evaluate source credibility and relevance quickly
        - Find diverse perspectives on any topic
        - Locate academic papers, industry reports, and expert analyses
        - Navigate different types of sources effectively''',
        llm=llm,
        tools=[
            search_tool.search,
            search_tool.fetch_content,
            search_tool.academic_search
        ],
        verbose=True
    )

def create_scouting_task(agent: Agent, research_plan: str) -> str:
    return f"""Execute the search strategy from this research plan:

{research_plan}

For each search query in the plan:
1. Execute the search
2. Evaluate results for relevance and quality
3. Fetch content from promising sources
4. Note source credibility indicators

Track and report:
- Total sources found
- Sources selected for deeper analysis
- Any gaps in coverage
- Suggestions for additional searches

Compile a source list with:
- URL
- Title
- Relevance score (1-10)
- Credibility indicators
- Key content preview"""
Content Analyst Agent
# agents/analyst.py
from crewai import Agent
from langchain_community.llms import Ollama
from tools.content_extractor import ContentExtractor

def create_content_analyst(llm: Ollama, extractor: ContentExtractor) -> Agent:
    return Agent(
        role='Content Analyst',
        goal='Extract, analyze, and organize key insights from research sources',
        backstory='''You are a senior research analyst with expertise in
        synthesizing information from diverse sources. You excel at:
        - Identifying the most important claims and findings
        - Extracting supporting evidence and data
        - Recognizing patterns across sources
        - Noting methodology and limitations
        - Distinguishing fact from opinion''',
        llm=llm,
        tools=[extractor.extract_key_points, extractor.compare_sources],
        verbose=True
    )

def create_analysis_task(agent: Agent, sources: str) -> str:
    return f"""Analyze these research sources:

{sources}

For each source, extract:

1. **Key Claims**
   - Main arguments or findings
   - Confidence level of each claim

2. **Evidence**
   - Data and statistics cited
   - Case studies or examples
   - Expert opinions referenced

3. **Methodology**
   - How conclusions were reached
   - Any limitations acknowledged

4. **Relevance**
   - How this connects to the research questions
   - Unique insights provided

After analyzing all sources, provide:
- Cross-source patterns
- Points of agreement
- Contradictions or tensions
- Information gaps remaining"""
Fact Checker Agent
# agents/fact_checker.py
from crewai import Agent
from langchain_community.llms import Ollama
from tools.linkup_search import LinkupSearchTool

def create_fact_checker(llm: Ollama, search_tool: LinkupSearchTool) -> Agent:
    return Agent(
        role='Fact Checker',
        goal='Verify claims and ensure research accuracy',
        backstory='''You are a rigorous fact-checker with a background in
        investigative journalism. You:
        - Question all claims, especially extraordinary ones
        - Seek primary sources when possible
        - Cross-reference facts across multiple sources
        - Identify potential biases and conflicts of interest
        - Distinguish between established facts and emerging findings''',
        llm=llm,
        tools=[search_tool.search, search_tool.academic_search],
        verbose=True
    )

def create_fact_checking_task(agent: Agent, claims: str) -> str:
    return f"""Verify the key claims from this analysis:

{claims}

For each major claim:

1. **Verification Attempt**
   - Search for corroborating sources
   - Look for contradicting evidence
   - Check for primary sources

2. **Confidence Assessment**
   - Verified: Multiple reliable sources confirm
   - Likely: Good evidence but not definitive
   - Uncertain: Conflicting evidence or limited sources
   - Unverified: Could not find corroboration

3. **Context**
   - Any important nuances or caveats
   - How recent the information is
   - Any known controversies

Report:
- Claims that passed verification
- Claims needing qualification
- Claims that could not be verified
- Any corrections needed"""
Synthesizer Agent
# agents/synthesizer.py
from crewai import Agent
from langchain_community.llms import Ollama

def create_synthesizer(llm: Ollama) -> Agent:
    return Agent(
        role='Research Synthesizer',
        goal='Combine research findings into coherent, insightful conclusions',
        backstory='''You are a master synthesizer who can weave together
        diverse information into clear narratives. You:
        - See connections others miss
        - Balance different perspectives fairly
        - Identify the most significant insights
        - Acknowledge uncertainty appropriately
        - Create structured, logical arguments''',
        llm=llm,
        verbose=True
    )

def create_synthesis_task(agent: Agent, analysis: str, fact_check: str) -> str:
    return f"""Synthesize the research findings:

Analysis Results:
{analysis}

Fact Check Results:
{fact_check}

Create a synthesis that covers:

1. **Key Findings**
   - The most important discoveries
   - How they answer the research questions
   - Confidence level for each finding

2. **Integrated Understanding**
   - How different findings connect
   - The overall picture that emerges
   - What this means for the topic

3. **Nuances and Caveats**
   - Important qualifications
   - Areas of uncertainty
   - Conflicting perspectives and how to reconcile them

4. **Implications**
   - What follows from these findings
   - Practical applications
   - Areas for future research

5. **Confidence Summary**
   - What we can say confidently
   - What remains uncertain
   - What we cannot determine from available sources"""
Report Writer Agent
# agents/writer.py
from crewai import Agent
from langchain_community.llms import Ollama
from tools.content_extractor import CitationManager

def create_report_writer(llm: Ollama, citation_manager: CitationManager) -> Agent:
    return Agent(
        role='Research Report Writer',
        goal='Produce clear, well-structured research reports',
        backstory='''You are an expert technical writer who creates
        compelling research reports. You:
        - Write clearly for the intended audience
        - Structure information logically
        - Use evidence effectively
        - Cite sources properly
        - Balance detail with readability''',
        llm=llm,
        tools=[citation_manager.add_citation, citation_manager.get_citations],
        verbose=True
    )

def create_writing_task(agent: Agent, synthesis: str, format: str = "markdown") -> str:
    return f"""Write a research report based on this synthesis:

{synthesis}

Report Requirements:

1. **Executive Summary**
   - Key findings in 2-3 paragraphs
   - Main conclusions
   - Confidence levels

2. **Introduction**
   - Topic overview
   - Research questions addressed
   - Scope and methodology

3. **Findings**
   - Organized by theme or question
   - Evidence and citations for claims
   - Visual aids where helpful (describe charts/tables)

4. **Analysis**
   - What the findings mean
   - Connections and patterns
   - Implications

5. **Limitations**
   - What this research cannot conclude
   - Areas of uncertainty
   - Potential biases

6. **Conclusions**
   - Summary of key insights
   - Recommendations if applicable
   - Future research directions

7. **References**
   - All sources cited properly

Output format: {format}
Ensure all claims are properly cited."""
Orchestrating the Research Crew
Main Workflow
# workflows/research_crew.py
from crewai import Crew, Process, Task
from langchain_community.llms import Ollama
from agents.planner import create_research_planner, create_planning_task
from agents.scout import create_source_scout, create_scouting_task
from agents.analyst import create_content_analyst, create_analysis_task
from agents.fact_checker import create_fact_checker, create_fact_checking_task
from agents.synthesizer import create_synthesizer, create_synthesis_task
from agents.writer import create_report_writer, create_writing_task
from tools.linkup_search import LinkupSearchTool
from tools.content_extractor import ContentExtractor, CitationManager
from config.settings import settings

class DeepResearcher:
    def __init__(self):
        # Initialize LLM
        self.llm = Ollama(
            model=settings.ollama_model,
            base_url=settings.ollama_base_url
        )

        # Initialize tools
        self.search_tool = LinkupSearchTool(settings.linkup_api_key)
        self.extractor = ContentExtractor()
        self.citation_manager = CitationManager()

        # Initialize agents
        self.planner = create_research_planner(self.llm)
        self.scout = create_source_scout(self.llm, self.search_tool)
        self.analyst = create_content_analyst(self.llm, self.extractor)
        self.fact_checker = create_fact_checker(self.llm, self.search_tool)
        self.synthesizer = create_synthesizer(self.llm)
        self.writer = create_report_writer(self.llm, self.citation_manager)

    def research(self, topic: str, depth: str = "comprehensive") -> str:
        """Execute the full research workflow."""
        # Phase 1: Planning
        planning_task = Task(
            description=create_planning_task(self.planner, topic, depth),
            expected_output="Detailed research plan with questions and search strategy",
            agent=self.planner
        )

        # Phase 2: Source Discovery
        # CrewAI passes each task's output to downstream tasks via `context`,
        # so the descriptions point at that context rather than trying to
        # interpolate outputs themselves.
        scouting_task = Task(
            description=create_scouting_task(
                self.scout, "the research plan provided in the task context"
            ),
            expected_output="Curated list of sources with content",
            agent=self.scout,
            context=[planning_task]
        )

        # Phase 3: Analysis
        analysis_task = Task(
            description=create_analysis_task(
                self.analyst, "the sources gathered in the task context"
            ),
            expected_output="Extracted insights and cross-source analysis",
            agent=self.analyst,
            context=[scouting_task]
        )

        # Phase 4: Fact Checking
        fact_check_task = Task(
            description=create_fact_checking_task(
                self.fact_checker, "the analysis provided in the task context"
            ),
            expected_output="Verified claims with confidence levels",
            agent=self.fact_checker,
            context=[analysis_task]
        )

        # Phase 5: Synthesis
        synthesis_task = Task(
            description=create_synthesis_task(
                self.synthesizer,
                "the analysis provided in the task context",
                "the fact-check results provided in the task context"
            ),
            expected_output="Integrated research synthesis",
            agent=self.synthesizer,
            context=[analysis_task, fact_check_task]
        )

        # Phase 6: Report Writing
        writing_task = Task(
            description=create_writing_task(
                self.writer,
                "the synthesis provided in the task context",
                settings.output_format
            ),
            expected_output="Complete research report",
            agent=self.writer,
            context=[synthesis_task]
        )

        # Create and run the crew
        crew = Crew(
            agents=[
                self.planner,
                self.scout,
                self.analyst,
                self.fact_checker,
                self.synthesizer,
                self.writer
            ],
            tasks=[
                planning_task,
                scouting_task,
                analysis_task,
                fact_check_task,
                synthesis_task,
                writing_task
            ],
            process=Process.sequential,
            verbose=True
        )
        result = crew.kickoff()
        return str(result)

    def quick_research(self, topic: str) -> str:
        """Faster research with fewer agents."""
        # Simplified workflow: plan, search, analyze, write
        planning_task = Task(
            description=create_planning_task(self.planner, topic, "quick"),
            expected_output="Quick research plan",
            agent=self.planner
        )
        scouting_task = Task(
            description=create_scouting_task(
                self.scout, "the research plan provided in the task context"
            ),
            expected_output="Key sources",
            agent=self.scout,
            context=[planning_task]
        )

        # Combined analysis and writing
        combined_task = Task(
            description="""Analyze the sources provided in the task context
and write a concise report with:
1. Key findings (bullet points)
2. Main insights (2-3 paragraphs)
3. Sources used

Keep it under 1000 words.""",
            expected_output="Brief research report",
            agent=self.writer,
            context=[scouting_task]
        )

        crew = Crew(
            agents=[self.planner, self.scout, self.writer],
            tasks=[planning_task, scouting_task, combined_task],
            process=Process.sequential,
            verbose=True
        )
        return str(crew.kickoff())
Main Entry Point
# main.py
from workflows.research_crew import DeepResearcher
from config.settings import settings
import argparse
import os
from datetime import datetime

def main():
    parser = argparse.ArgumentParser(description="Deep Research Agent")
    parser.add_argument("topic", help="Research topic")
    parser.add_argument(
        "--depth",
        choices=["quick", "standard", "comprehensive"],
        default="standard",
        help="Research depth"
    )
    parser.add_argument(
        "--output",
        help="Output file path",
        default=None
    )
    args = parser.parse_args()

    print(f"Starting research on: {args.topic}")
    print(f"Depth: {args.depth}")
    print("-" * 50)

    researcher = DeepResearcher()
    if args.depth == "quick":
        result = researcher.quick_research(args.topic)
    else:
        result = researcher.research(args.topic, args.depth)

    # Save output
    if args.output:
        output_path = args.output
    else:
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        safe_topic = args.topic[:30].replace(" ", "_")
        output_path = f"output/{safe_topic}_{timestamp}.md"

    # Make sure the output directory exists before writing
    os.makedirs(os.path.dirname(output_path) or ".", exist_ok=True)
    with open(output_path, "w") as f:
        f.write(result)

    print(f"\nResearch complete! Report saved to: {output_path}")

if __name__ == "__main__":
    main()
Running the Researcher
Basic Usage
# Quick research
python main.py "impact of AI on healthcare" --depth quick
# Standard research
python main.py "renewable energy adoption barriers" --depth standard
# Comprehensive research
python main.py "future of quantum computing" --depth comprehensive
# Specify output file
python main.py "blockchain in supply chain" --output reports/blockchain_research.md
Example Output
# Research Report: Impact of AI on Healthcare
## Executive Summary
Artificial intelligence is transforming healthcare across diagnostics,
treatment planning, and operational efficiency. Key findings indicate...
[Confidence: High - based on 12 verified sources]
## Introduction
This research investigates the current state and future trajectory of
AI applications in healthcare systems...
## Key Findings
### 1. Diagnostic Accuracy
AI-powered diagnostic tools show significant improvements in accuracy
for specific conditions:
- Medical imaging analysis: 94% accuracy in detecting certain cancers [1]
- Preliminary diagnosis: Reduction in diagnostic errors by 30% [2]
### 2. Treatment Optimization
...
## Analysis
The convergence of several factors is accelerating AI adoption:
...
## Limitations
This research relies primarily on published studies and may not reflect
the most recent developments. Additionally...
## Conclusions
AI in healthcare represents a transformative opportunity with substantial
evidence of benefits. However, challenges around data privacy, algorithmic
bias, and integration with existing systems must be addressed...
## References
[1] Smith et al. "AI in Medical Imaging" Nature Medicine, 2024
URL: https://nature.com/...
[2] Healthcare AI Consortium. "Diagnostic AI Impact Study" 2024
URL: https://healthcareai.org/...
Advanced Features
Parallel Source Processing
async def parallel_scouting(self, queries: List[str]) -> List[str]:
    """Run multiple search queries concurrently.

    The Linkup search call is synchronous, so wrap each call in
    asyncio.to_thread; awaiting plain wrappers around a blocking
    call would still run the searches one at a time.
    """
    import asyncio

    tasks = [asyncio.to_thread(self.search_tool.search, q) for q in queries]
    return await asyncio.gather(*tasks)
Iterative Deepening
def iterative_research(self, topic: str, max_iterations: int = 3) -> str:
    """Research with iterative deepening on uncertain areas."""
    all_findings = []
    for iteration in range(max_iterations):
        if iteration == 0:
            # Initial broad research
            result = self.quick_research(topic)
        else:
            # Identify gaps and research deeper
            gaps = self._identify_gaps(all_findings)
            if not gaps:
                break
            result = self._research_gaps(gaps)
        all_findings.append(result)
    return self._compile_findings(all_findings)
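The helpers `_identify_gaps`, `_research_gaps`, and `_compile_findings` are left to the reader. One simple heuristic for gap identification is to scan each iteration's findings for the confidence labels the fact checker emits ("Uncertain", "Unverified") and turn the flagged claims into follow-up queries. A hypothetical sketch, not part of the tutorial's code:

```python
from typing import List

UNCERTAIN_MARKERS = ("Uncertain:", "Unverified:")

def identify_gaps(findings: List[str]) -> List[str]:
    """Collect claims the fact checker flagged as uncertain or unverified.

    Each returned string can be reused directly as a follow-up search query.
    """
    gaps = []
    for report in findings:
        for line in report.splitlines():
            stripped = line.strip("- ").strip()
            if stripped.startswith(UNCERTAIN_MARKERS):
                # Keep the claim text, drop the label prefix
                claim = stripped.split(":", 1)[1].strip()
                if claim:
                    gaps.append(claim)
    return gaps
```

This only works because the fact-checking task prompt asks for those exact labels; if you change the prompt's confidence vocabulary, update the markers to match.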
Source Quality Scoring
# Note: requires `from datetime import datetime` at the top of the module.
def score_source_quality(self, source: Dict) -> float:
    """Score source quality based on multiple factors."""
    score = 0.5  # Base score

    # Domain authority
    trusted_domains = ['.edu', '.gov', 'nature.com', 'science.org']
    if any(d in source['url'] for d in trusted_domains):
        score += 0.2

    # Recency (assumes source['date'] is a datetime)
    if source.get('date'):
        age_days = (datetime.now() - source['date']).days
        if age_days < 365:
            score += 0.15
        elif age_days < 730:
            score += 0.1

    # Citation count (if available)
    if source.get('citations', 0) > 100:
        score += 0.15

    return min(score, 1.0)
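As a standalone function (dropping `self` and taking plain arguments), the scorer is easy to experiment with; the expected values below follow from the weights used above (0.5 base, +0.2 trusted domain, +0.15 recency, +0.15 citations, capped at 1.0). `score_source` is a name introduced here for illustration:

```python
from datetime import datetime, timedelta

def score_source(url: str, date=None, citations: int = 0) -> float:
    """Standalone version of the quality scorer for quick experimentation."""
    score = 0.5  # base score for any source
    if any(d in url for d in ['.edu', '.gov', 'nature.com', 'science.org']):
        score += 0.2  # domain authority bonus
    if date is not None:
        age_days = (datetime.now() - date).days
        if age_days < 365:
            score += 0.15  # published within the last year
        elif age_days < 730:
            score += 0.1   # published within two years
    if citations > 100:
        score += 0.15  # heavily cited
    return min(score, 1.0)

# A recent, highly cited .edu source hits the 1.0 cap:
#   score_source("https://med.stanford.edu/paper",
#                datetime.now() - timedelta(days=30), citations=250)
```

Keeping the weights additive and capped makes the score easy to tune: each signal moves the result by a fixed, inspectable amount.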
Summary
Multi-agent research systems achieve depth and rigor that single agents cannot match. By distributing research responsibilities across specialized agents, you get:
- Thorough coverage: Dedicated agents for each research phase
- Quality control: Fact-checking agent validates claims
- Balanced perspectives: Scout finds diverse sources
- Clear output: Writer agent produces polished reports
Key implementation points:
- Use local LLMs: Ollama provides cost-effective inference for research tasks
- Layer the workflow: Planning, discovery, analysis, verification, synthesis
- Track citations: Maintain proper attribution throughout
- Allow iteration: Some topics need multiple passes
This architecture extends naturally. Add domain-specific expert agents. Integrate with more data sources. Implement collaborative human-agent review. The multi-agent foundation makes these extensions straightforward.
Ready to build financial analysis capabilities? Continue to Building a Financial Analyst Agent with MCP to learn how to create agents that analyze financial data and markets.