Making Documentation AI-Readable
Convert your docs to llms.txt format so AI tools can consume them. Practical guide to structured documentation for AI consumption.
Documentation has always served one audience: human developers who read linearly, follow hyperlinks, and infer meaning from context. But in 2026, a growing share of your documentation's "readers" are AI tools -- Claude Code, Copilot, Cursor, and dozens of others that need to understand your API, your conventions, and your architecture.
These AI tools read your docs differently than humans. They do not follow hyperlinks. They do not skim headers. They consume the entire text and extract structured meaning. If your documentation is optimized for human browsing (navigation menus, collapsible sections, interactive examples), it is suboptimal for AI consumption.
The solution is llms.txt -- a standard format for making your documentation AI-readable without changing the human-facing version.
Key Takeaways
- llms.txt is a machine-readable index of your documentation placed at the root of your website, similar to robots.txt and sitemap.xml
- AI tools spend fewer tokens understanding well-structured documentation, reducing both cost and latency for users of your API
- The format is simple markdown with a specific structure: title, summary, sections with links, and optional detailed content
- You do not replace your existing docs -- llms.txt is an additional layer that points AI tools to the right content
- Early adopters report 30-40% fewer incorrect API calls from AI coding assistants after adding llms.txt
What Is llms.txt?
The llms.txt standard (proposed at llmstxt.org) defines a file at yourdomain.com/llms.txt that provides AI tools with a structured overview of your documentation. Think of it as a table of contents optimized for machine consumption.
# Your Project Name
> Brief one-line description of what this project does.
## Docs
- [Getting Started](https://docs.example.com/getting-started): Setup and installation
- [API Reference](https://docs.example.com/api): Complete API documentation
- [Authentication](https://docs.example.com/auth): Auth flows and token management
- [Error Handling](https://docs.example.com/errors): Error codes and recovery
## Optional
- [Changelog](https://docs.example.com/changelog): Recent changes
- [Migration Guide](https://docs.example.com/migration): Version upgrade paths
The format is intentionally simple. AI tools parse it to understand what documentation exists and where to find detailed information about specific topics.
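To make the structure concrete, here is a minimal sketch of how a consuming tool might parse an llms.txt file into sections and links. This is our illustration, not a reference parser; the regexes assume exactly the shape shown above (`## Section` headings and `- [Title](url): description` items).

```typescript
interface LlmsLink {
  title: string
  url: string
  description: string
}

// Parse llms.txt into a map of section name -> links.
function parseLlmsTxt(text: string): Map<string, LlmsLink[]> {
  const sections = new Map<string, LlmsLink[]>()
  let current = ''
  for (const line of text.split('\n')) {
    const heading = line.match(/^## (.+)/)
    if (heading) {
      current = heading[1].trim()
      sections.set(current, [])
      continue
    }
    const link = line.match(/^- \[(.+?)\]\((.+?)\)(?::\s*(.*))?$/)
    if (link && current) {
      sections.get(current)!.push({
        title: link[1],
        url: link[2],
        description: link[3] ?? '',
      })
    }
  }
  return sections
}
```

A tool that parses the index this way can then decide, per section, which linked pages are worth fetching in full.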
llms-full.txt
For smaller documentation sets, you can also provide llms-full.txt -- a single file containing all your documentation in one place. This is ideal for projects with under 50 pages of docs, where the full content fits within a model's context window.
# Your Project Name
> Brief description.
## Getting Started
Full getting started content here...
## API Reference
Full API reference here...
Why Your Current Docs Are Not Enough
The Token Cost Problem
When an AI tool needs to understand your API, it typically fetches your documentation pages and includes them in the context window. If your docs are HTML-heavy with navigation, sidebars, footers, and scripts, the AI is paying tokens for content that adds zero value.
A typical documentation page might be 15KB of HTML but only 3KB of actual content. The AI processes all 15KB, paying for 12KB of navigation chrome.
llms.txt (and especially llms-full.txt) strips away the chrome and delivers pure content. The AI gets better information at lower token cost.
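You can estimate this overhead for your own pages. The sketch below strips scripts, styles, and tags from an HTML payload and reports what fraction is actual text; the function name and the tag-stripping regexes are ours, and regex-based HTML stripping is a rough heuristic, not a real parser.

```typescript
// Fraction of an HTML payload that is plain text content.
// A low ratio means the AI pays tokens mostly for markup chrome.
function contentRatio(html: string): number {
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<[^>]+>/g, ' ') // drop remaining tags
    .replace(/\s+/g, ' ')
    .trim()
  return text.length / html.length
}
```

A documentation page scoring around 0.2 on this metric is mostly chrome; the equivalent llms-full.txt content would score close to 1.0.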
The Navigation Problem
AI tools cannot click links. When your API reference says "see the authentication guide for details," a human clicks the link. An AI tool either ignores the reference or fetches the linked page, adding latency and tokens.
llms.txt eliminates this by providing either a flat index (llms.txt) or all content in one file (llms-full.txt). No navigation required.
The Structure Problem
Human-readable docs use visual hierarchy (larger fonts, colors, spacing) to convey importance. AI tools parse text, not visual design. The heading "Authentication" in 24px bold blue looks important to a human but is identical to any other text for an AI parser unless it uses proper markdown structure.
Building Your llms.txt
Step 1: Audit Your Documentation
List every documentation page and categorize it:
Essential (include in llms.txt):
- Getting started
- API reference
- Authentication
- Core concepts
- Error codes
Useful (include in Optional section):
- Changelog
- Migration guides
- FAQ
- Troubleshooting
Skip (not useful for AI):
- Team page
- Blog posts
- Legal/privacy policy
- Community links
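The audit maps directly onto the file's two sections: Essential pages go under `## Docs`, Useful pages under `## Optional`, and Skip pages are left out entirely. A sketch of emitting the file from that categorization (the types and function name are ours):

```typescript
interface DocPage {
  title: string
  url: string
  description: string
}

// Build llms.txt text from audited page lists.
function buildLlmsTxt(
  name: string,
  summary: string,
  essential: DocPage[],
  optional: DocPage[],
): string {
  const link = (p: DocPage) => `- [${p.title}](${p.url}): ${p.description}`
  return [
    `# ${name}`,
    `> ${summary}`,
    '',
    '## Docs',
    ...essential.map(link),
    '',
    '## Optional',
    ...optional.map(link),
    '',
  ].join('\n')
}
```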
Step 2: Write Concise Descriptions
Each link in llms.txt needs a one-line description that helps the AI decide whether to fetch the full page.
# Bad: too vague
- [API](https://docs.example.com/api): API documentation
# Good: specific and useful
- [REST API Reference](https://docs.example.com/api): Endpoints, request/response schemas, rate limits, and authentication headers for all v2 API routes
Step 3: Create the File
# Supabase
> Open-source Firebase alternative. Postgres database, authentication,
> instant APIs, edge functions, realtime subscriptions, and storage.
## Docs
- [Quick Start](https://supabase.com/docs/guides/getting-started): Project setup, client library installation, first query
- [Database](https://supabase.com/docs/guides/database): Table creation, RLS policies, migrations, functions
- [Auth](https://supabase.com/docs/guides/auth): Email/password, OAuth, magic links, session management
- [Storage](https://supabase.com/docs/guides/storage): File uploads, public/private buckets, CDN
- [Edge Functions](https://supabase.com/docs/guides/functions): Deno-based serverless functions
- [Realtime](https://supabase.com/docs/guides/realtime): Postgres changes, presence, broadcast
## API
- [JavaScript Client](https://supabase.com/docs/reference/javascript): supabase-js methods, types, examples
- [REST API](https://supabase.com/docs/guides/api): Auto-generated REST endpoints from your schema
## Optional
- [Self-Hosting](https://supabase.com/docs/guides/self-hosting): Docker setup, configuration
- [CLI](https://supabase.com/docs/guides/cli): Local development, migrations, testing
- [Pricing](https://supabase.com/pricing): Plans, limits, usage-based pricing
Step 4: Deploy
Place llms.txt at the root of your domain:
https://yourdomain.com/llms.txt
https://docs.yourdomain.com/llms.txt
For Next.js applications, create the file in the public/ directory:
public/
├── llms.txt
├── llms-full.txt (optional)
├── robots.txt
├── sitemap.xml
└── favicon.ico
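Before deploying, it is worth a quick sanity check that the file has the expected shape. A minimal validator, assuming the structure described earlier (H1 title, blockquote summary, `##` sections, markdown link items); the function name and rules are ours, not part of the spec:

```typescript
// Return a list of structural problems in an llms.txt file.
// An empty array means the file passes these basic checks.
function validateLlmsTxt(text: string): string[] {
  const errors: string[] = []
  const lines = text.split('\n')
  if (!lines[0]?.startsWith('# ')) {
    errors.push('first line must be an H1 title ("# Project Name")')
  }
  if (!lines.some(l => l.startsWith('> '))) {
    errors.push('missing blockquote summary ("> ...")')
  }
  if (!lines.some(l => l.startsWith('## '))) {
    errors.push('no "## Section" headings found')
  }
  for (const l of lines) {
    if (l.startsWith('- ') && !/^- \[.+?\]\(https?:\/\/.+?\)/.test(l)) {
      errors.push(`malformed link line: ${l}`)
    }
  }
  return errors
}
```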
Converting Existing Docs to AI-Readable Format
If you have extensive documentation, creating llms-full.txt manually is tedious. Automate it.
// scripts/generate-llms-full.ts
import { readdir, readFile, writeFile } from 'fs/promises'
import { join } from 'path'

async function generateLlmsFull(docsDir: string): Promise<string> {
  const sections: string[] = []
  sections.push('# Project Documentation\n')
  sections.push('> Complete documentation for AI consumption.\n')

  // readdir with { recursive: true } requires Node 18.17+
  const files = await readdir(docsDir, { recursive: true })
  const mdFiles = files
    .filter(f => f.endsWith('.md') || f.endsWith('.mdx'))
    .sort() // deterministic output regardless of filesystem order

  for (const file of mdFiles) {
    const content = await readFile(join(docsDir, file), 'utf-8')
    sections.push(stripFrontmatter(content))
    sections.push('\n---\n')
  }
  return sections.join('\n')
}

// Remove a leading YAML frontmatter block (--- ... ---)
function stripFrontmatter(content: string): string {
  return content.replace(/^---\r?\n[\s\S]*?\r?\n---\r?\n/, '')
}

generateLlmsFull('docs').then(output =>
  writeFile(join('public', 'llms-full.txt'), output),
)
Run this script as part of your build process to keep llms-full.txt in sync with your documentation.
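One way to wire this in, assuming the script lives at scripts/generate-llms-full.ts and you run TypeScript via tsx (both assumptions about your setup; adjust the runner and paths to match your project):

```json
{
  "scripts": {
    "generate:llms": "tsx scripts/generate-llms-full.ts",
    "build": "npm run generate:llms && next build"
  }
}
```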
Measuring Impact
After deploying llms.txt, measure its impact by tracking AI tool access patterns.
// middleware.ts (Next.js)
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl
  if (pathname === '/llms.txt' || pathname === '/llms-full.txt') {
    // Heuristic match on known AI tool user agents
    const userAgent = request.headers.get('user-agent') || ''
    const isAITool = /claude|openai|anthropic|cursor|copilot/i.test(userAgent)
    if (isAITool) {
      // Log the access for analytics (shows up in server logs)
      console.log(`AI tool accessed ${pathname}: ${userAgent}`)
    }
  }
  return NextResponse.next()
}
For more on analytics approaches, see our guide on privacy-first analytics.
FAQ
Is llms.txt an official standard?
It is a community-proposed standard, not an RFC or W3C recommendation. However, it has gained traction among AI tool developers and documentation platforms. The specification is simple enough that adoption is low-risk.
Will AI tools automatically find my llms.txt?
Some tools are starting to check for llms.txt. Others require manual configuration. As adoption grows, automatic discovery will become standard. In the meantime, mentioning llms.txt in your regular documentation helps AI tools find it.
Should I include sensitive information in llms.txt?
No. llms.txt is publicly accessible. Only include links to public documentation. Do not include internal API endpoints, credentials, or proprietary architecture details.
How often should I update llms.txt?
Update it whenever you add, remove, or significantly restructure documentation pages. For most projects, this means updating monthly or with each major release. Automating generation from your docs source ensures it stays current.
Does this replace structured data like OpenAPI specs?
No. llms.txt and OpenAPI serve different purposes. OpenAPI defines your API contract (endpoints, parameters, response schemas). llms.txt provides an overview of all your documentation including conceptual guides, tutorials, and architectural documentation that OpenAPI does not cover. Use both. See the CLI commands reference for how Claude Code uses structured documentation.
Explore production-ready AI skills at aiskill.market/browse or submit your own skill to the marketplace.
Sources
- llms.txt Specification - The proposed standard for AI-readable documentation
- Anthropic Claude Documentation - How Claude processes documentation
- Google Developers - Structured Data - Complementary structured data standards