Faster Linking for AI-Heavy Projects
Build optimization for AI-dependent codebases. Reduce link times, optimize dependency graphs, and speed up the edit-compile-run cycle for projects with large AI libraries.
The edit-compile-run cycle in AI projects has a bottleneck most developers never think about: linking. Compilation happens per-file and runs in parallel. Linking happens at the end, sequentially, and must process every symbol in every dependency.
AI-heavy projects have unusually large dependency trees. The Anthropic SDK, tokenization libraries, data processing packages, and their transitive dependencies add thousands of symbols that the linker must resolve. A project that compiles in 2 seconds can spend 15+ seconds linking.
This tutorial identifies the sources of slow linking and provides concrete optimizations that cut link times by 70% or more.
Key Takeaways
- Linking is largely sequential and resists parallelization -- it is often the true bottleneck in modern build pipelines
- AI SDKs bring large transitive dependency trees that the linker must process even if your code uses a fraction of the functionality
- Tree-shaking eliminates unused exports at build time, reducing the linker's workload
- Dynamic imports defer loading of heavy dependencies until they're actually needed
- Bundle analysis reveals which dependencies dominate link time so you can target optimization efforts
Understanding the Link Bottleneck
Why Linking Is Slow
Compilation transforms source files into object files in parallel. If you have 100 source files and 8 CPU cores, compilation runs 8 files simultaneously.
Linking takes all object files and combines them into the final output. This step is largely sequential because it must:
- Resolve every symbol reference (function calls, variable references)
- Verify no duplicate or missing symbols
- Order the code sections for the output binary
- Generate source maps for debugging
For JavaScript/TypeScript projects bundled by webpack or Turbopack, "linking" is the bundling phase where modules are combined into chunks. The same sequential bottleneck applies.
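To make the bottleneck concrete, here is a minimal sketch -- not any real linker's algorithm -- of the symbol-resolution pass. Every module's references must be checked against one shared table of exports, which is why the step resists parallelization:

```typescript
// Illustrative model of a linker's symbol-resolution pass.
interface Module {
  name: string;
  exports: string[];   // symbols this module defines
  imports: string[];   // symbols this module references
}

function resolveSymbols(modules: Module[]): string[] {
  const table = new Map<string, string>(); // symbol -> defining module
  const errors: string[] = [];

  // Pass 1: collect every export, flagging duplicates.
  for (const mod of modules) {
    for (const sym of mod.exports) {
      if (table.has(sym)) {
        errors.push(`duplicate symbol ${sym} in ${mod.name} and ${table.get(sym)}`);
      } else {
        table.set(sym, mod.name);
      }
    }
  }

  // Pass 2: check every reference against the shared table.
  for (const mod of modules) {
    for (const sym of mod.imports) {
      if (!table.has(sym)) {
        errors.push(`unresolved symbol ${sym} referenced by ${mod.name}`);
      }
    }
  }
  return errors;
}
```

Both passes walk a single shared table, so the work grows with the total symbol count of every dependency, used or not.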
How AI Dependencies Make It Worse
A typical AI project's dependency tree:
@anthropic-ai/sdk
├── @anthropic-ai/core (HTTP client, types)
├── node-fetch (if Node.js < 18)
├── form-data
└── abort-controller
langchain (if used)
├── @langchain/core
├── @langchain/community
├── uuid
├── yaml
├── p-retry
├── p-queue
└── ... (50+ transitive dependencies)
Each dependency adds symbols to the linker's resolution table. The @anthropic-ai/sdk is relatively lean. But common AI tooling stacks with LangChain, vector stores, and embedding libraries can add hundreds of packages.
Measuring Link Time
Add timing to your build:
# Next.js build with timing
NEXT_TELEMETRY_DEBUG=1 time npx next build
# Webpack with timing
npx webpack --profile --progress
# Turbopack (Next.js dev)
NEXT_PRIVATE_TRACE=1 next dev --turbopack
The output shows time spent in each phase. If linking/bundling exceeds compilation time, optimization is worthwhile.
Optimization 1: Tree Shaking
What It Does
Tree shaking eliminates unused exports from your bundle. If you import one function from a library that exports 200, tree shaking removes the other 199.
Enabling It
Tree shaking requires ESM imports (not CommonJS require):
// Tree-shakeable: bundler can remove unused exports
import { Anthropic } from '@anthropic-ai/sdk'
// NOT tree-shakeable: bundler must include everything
const sdk = require('@anthropic-ai/sdk')
Verify your dependencies support tree shaking by checking their package.json for "module" or "exports" fields.
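That manifest check can be automated. A rough heuristic (the field names are real package.json fields, but the rule itself is a simplification):

```typescript
// Heuristic: a package that declares "module" or "exports" ships ESM entry
// points and is a tree-shaking candidate; a "main"-only package usually
// points at a CommonJS file and is not.
interface PackageManifest {
  main?: string;
  module?: string;
  exports?: unknown;
  sideEffects?: boolean | string[];
}

function likelyTreeShakeable(pkg: PackageManifest): boolean {
  return pkg.module !== undefined || pkg.exports !== undefined;
}
```

For example, `likelyTreeShakeable(require('@anthropic-ai/sdk/package.json'))` applies the check to an installed dependency.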
Next.js Configuration
Next.js enables tree shaking by default in production builds. But you can help it by marking packages as side-effect-free:
// next.config.js
const nextConfig = {
experimental: {
optimizePackageImports: [
'@anthropic-ai/sdk',
'lucide-react',
'@supabase/supabase-js',
],
},
}
The optimizePackageImports option tells Next.js to aggressively tree-shake these packages, even if their package.json doesn't declare them as side-effect-free.
Optimization 2: Dynamic Imports
Deferring Heavy Dependencies
AI libraries that are only used in specific routes or features should be dynamically imported:
// Before: imported at module level, linked into every page
import Anthropic from '@anthropic-ai/sdk'
export async function analyzeSkill(skillId: string) {
const client = new Anthropic()
return await client.messages.create({ /* ... */ })
}
// After: imported on demand, only linked into pages that use this function
export async function analyzeSkill(skillId: string) {
const { default: Anthropic } = await import('@anthropic-ai/sdk')
const client = new Anthropic()
return await client.messages.create({ /* ... */ })
}
Dynamic imports create separate chunks that are loaded only when the function is called. Pages that don't use AI analysis don't pay the link-time cost.
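One refinement: calling `await import(...)` inside a hot function re-enters the module cache on every call. A small memoization wrapper keeps the deferred-loading benefit while paying the lookup once. This is a sketch; the loader argument stands in for `() => import('@anthropic-ai/sdk')`:

```typescript
// Memoize a dynamic import: the loader runs on the first call only,
// and every later call reuses the same promise.
function memoizeImport<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= loader());
}

// Usage (illustrative):
// const getAnthropic = memoizeImport(() => import('@anthropic-ai/sdk'))
// const { default: Anthropic } = await getAnthropic()
```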
Route-Level Code Splitting
In Next.js, each route is automatically code-split. But shared dependencies are bundled into a common chunk. If the AI SDK is imported by a server action used on multiple pages, it ends up in the shared chunk, increasing the initial bundle for all pages.
// app/actions/ai-analyze.ts -- only imported by pages that need it
'use server'
export async function analyzeWithAI(input: string) {
const { default: Anthropic } = await import('@anthropic-ai/sdk')
// ...
}
Server Actions with dynamic imports keep the AI SDK out of the client bundle entirely. The import happens server-side when the action is called.
Optimization 3: Bundle Analysis
Finding the Biggest Dependencies
Use @next/bundle-analyzer to visualize what's in your bundles:
npm install @next/bundle-analyzer
// next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
enabled: process.env.ANALYZE === 'true',
})
module.exports = withBundleAnalyzer(nextConfig)
ANALYZE=true npx next build
This opens an interactive treemap showing every module's size. Common findings in AI projects:
- @anthropic-ai/sdk is typically 50-100KB (reasonable)
- Icon libraries (full import) can be 200KB+ (tree-shake!)
- Date libraries can be 70KB+ (consider date-fns over moment)
- JSON schema validators can be 100KB+ (dynamically import)
The 80/20 Rule
Usually, 3-5 dependencies account for 80% of bundle size. Focus optimization on those:
# Quick check: which packages are biggest?
du -sh node_modules/* | sort -rh | head -20
This shows on-disk size, which only roughly correlates with bundled size, but packages over 5MB on disk deserve investigation.
Optimization 4: Dependency Deduplication
The Duplicate Dependency Problem
npm can install multiple versions of the same package if different dependencies require different version ranges:
# Check for duplicates (dry run of npm dedupe)
npm find-dupes
Duplicates double the linking work for that package. Fix with:
# Deduplicate where possible
npm dedupe
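The same check can be expressed programmatically. A sketch, assuming the dependency tree has already been flattened into plain name@version strings:

```typescript
// Given flat "name@version" entries, report packages installed at more
// than one version -- each one doubles the linker's work for that package.
function findDuplicates(installed: string[]): Map<string, Set<string>> {
  const versions = new Map<string, Set<string>>();
  for (const entry of installed) {
    const at = entry.lastIndexOf('@'); // lastIndexOf handles scoped names like @scope/pkg@1.0.0
    const name = entry.slice(0, at);
    const version = entry.slice(at + 1);
    if (!versions.has(name)) versions.set(name, new Set());
    versions.get(name)!.add(version);
  }
  return new Map([...versions].filter(([, v]) => v.size > 1));
}
```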
Package Overrides
Force a single version when duplicates aren't automatically resolvable:
{
"overrides": {
"typescript": "5.5.0"
}
}
Use overrides cautiously -- forcing incompatible versions can cause runtime errors. But for minor version differences (5.4.x vs 5.5.x), overrides are safe and reduce bundle size.
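The "minor version differences" rule of thumb can be stated in semver terms: an override is usually safe when both versions share a major version. A trivial check (illustrative only; real tooling should use a semver library):

```typescript
// Under semver conventions, versions with the same major number are
// API-compatible, so forcing one of them via overrides is usually safe.
// Crossing a major boundary may break at runtime.
function sameMajor(a: string, b: string): boolean {
  return a.split('.')[0] === b.split('.')[0];
}
```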
Optimization 5: External Dependencies
Excluding from Bundle
Dependencies that only run server-side don't need to be bundled for the client:
// next.config.js
const nextConfig = {
serverExternalPackages: [
'@anthropic-ai/sdk',
'sharp',
'puppeteer',
],
}
This tells Next.js to use the node_modules version at runtime instead of bundling these packages. The linker skips them entirely for client bundles.
Native Modules
Packages with native bindings (like sharp for image processing or better-sqlite3 for databases) can't be bundled at all. Mark them as external to prevent build errors:
const nextConfig = {
webpack: (config, { isServer }) => {
if (isServer) {
config.externals.push('sharp', 'better-sqlite3')
}
return config
},
}
Measuring Improvement
After applying optimizations, measure the impact:
# Before
time npx next build
# Build time: 47s (18s linking)
# After tree shaking + dynamic imports + externals
time npx next build
# Build time: 22s (3s linking)
The improvement compounds with every build. For a team running 50+ builds per day (local + CI), saving 25 seconds per build saves over 20 minutes of total wait time daily.
For projects that also implement build cache strategies, the combined optimization can reduce build times by 85%.
FAQ
Does tree shaking work with CommonJS modules?
Limited. Webpack can tree-shake some CommonJS patterns, but ESM imports are much more reliably tree-shaken. Prefer ESM imports for all AI dependencies.
Will dynamic imports slow down my application?
The first call to a dynamically imported module has a loading delay (typically 10-50ms for server-side imports). Subsequent calls use the cached module. For server actions and API routes, this delay is negligible.
How do I know if my dependencies support tree shaking?
Check the package's package.json for "module" or "exports" fields. Packages that only have "main" (pointing to a CommonJS file) don't support tree shaking. The @anthropic-ai/sdk supports tree shaking.
Should I use Turbopack or Webpack for AI projects?
Turbopack is faster for development (module-level caching). Webpack is more mature for production builds. Next.js 15 supports both. Use Turbopack for next dev and the default bundler for next build.
Can I pre-link dependencies that rarely change?
Yes. DLL plugins (webpack) and pre-bundling (Vite/Turbopack) compile stable dependencies once and link them as pre-built modules. This eliminates redundant processing of unchanged packages.
Sources
- Webpack Tree Shaking -- Official tree shaking documentation
- Next.js Build Optimization -- Framework-specific build optimization
- npm Dedupe Documentation -- Dependency deduplication
- Turbopack Architecture -- Module-level caching for faster builds
Explore production-ready AI skills at aiskill.market/browse or submit your own skill to the marketplace.