How AI Builds Apps 20x Faster
A first-person account of building a full-stack application with AI in 3 days instead of 60. What worked, what failed, and the real multiplier.
Everybody claims AI makes development faster. Few people show the receipts. This is a detailed, hour-by-hour account of building a full-stack web application with Claude Code -- what worked on the first prompt, what required five iterations, and what I ended up writing by hand because AI could not get it right.
The project: a skill marketplace with search, filtering, detail pages, user submissions, and admin review. Next.js 15, Supabase, Tailwind CSS, deployed to Vercel. The kind of application that, in my pre-AI workflow, would have taken 8-10 weeks of part-time development.
It took 3 days. Here is how.
Key Takeaways
- The 20x speed claim holds for CRUD-heavy applications where most features follow well-established patterns
- The actual multiplier varies dramatically by task type -- data fetching is 50x faster, complex state management is only 3x faster
- 60% of the time savings come from eliminating research and context switching, not from faster code generation
- AI excels at the boring middle -- the boilerplate, the integration code, the repeated patterns -- freeing you for the interesting edges
- The remaining 15% of manual work is the 15% that determines whether the app is good or merely functional
Day 1: Foundation (8 Hours)
Hour 1: Project Setup (Normally: 4 Hours)
claude "Create a new Next.js 15 project with App Router, Tailwind CSS, Shadcn UI,
and Supabase. Set up the project structure following the conventions in CLAUDE.md"
Claude scaffolded the entire project in under 3 minutes. Directory structure, configuration files, Supabase client setup, TypeScript config, ESLint config. All of it followed the conventions I specified.
What would have taken me 4 hours (including researching the latest Next.js 15 App Router patterns, setting up Supabase auth, configuring Tailwind plugins) took 3 minutes of Claude's time and 20 minutes of my review time.
Speed multiplier: ~10x
Hour 2-3: Database Schema and Queries (Normally: 8 Hours)
I described the data model in plain language:
"Create a Supabase schema for a skill marketplace. Skills have titles, descriptions,
types (command, skill, plugin), categories, install counts, quality scores, and
belong to creators. Creators have names, GitHub usernames, and verification status."
Claude generated the SQL migration, TypeScript types, and Supabase query functions. It also generated Row Level Security policies, which I would have forgotten to do until later.
I spent an hour reviewing the schema, adjusting a few column types, and adding two indexes Claude had not thought of. But the structure was 90% correct on the first try.
Speed multiplier: ~4x (schema design needs human judgment)
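To give a flavor of what came out of that prompt, here is a sketch of the kind of TypeScript types the schema maps to. The field names are my illustration, not the actual migration output, and the runtime guard is the sort of helper you want when narrowing untrusted form input to the enum:

```typescript
// Illustrative sketch of the schema types -- field names are
// assumptions, not the actual generated migration output.
const skillTypes = ["command", "skill", "plugin"] as const;
type SkillType = (typeof skillTypes)[number];

interface Creator {
  id: string;
  name: string;
  githubUsername: string;
  verified: boolean;
}

interface Skill {
  id: string;
  title: string;
  description: string;
  type: SkillType;
  category: string;
  installCount: number;
  qualityScore: number;
  creatorId: string;
}

// Runtime guard for narrowing untrusted input (e.g. a query param)
// to the SkillType enum.
function isSkillType(value: string): value is SkillType {
  return (skillTypes as readonly string[]).includes(value);
}
```

Having the enum defined once as a `const` array gives you both the compile-time union and the runtime check from a single source of truth.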
Hour 4-6: Core Pages (Normally: 20 Hours)
This was where the speed advantage was most dramatic. Five pages, each with data fetching, layout, and components:
- Home page with hero section and featured skills
- Browse page with grid layout
- Skill detail page with all metadata
- Submit page with form
- About page
Each page took about 20 minutes: 2 minutes for Claude to generate, 18 minutes for me to review, adjust styling, and test. The components followed the Shadcn UI patterns I specified in CLAUDE.md, so consistency was automatic.
Speed multiplier: ~7x
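Every page's loader followed the same shape: fetch, transform, hand the component render-ready data. A minimal sketch of that pattern, with a stand-in interface where the real code uses the Supabase client:

```typescript
// Stand-in for the Supabase client so the pattern is self-contained;
// the real code calls supabase.from("skills").select(...) instead.
interface SkillRow {
  id: string;
  title: string;
  installCount: number;
}

interface SkillSource {
  listSkills(): Promise<SkillRow[]>;
}

// The shape every page loader followed: fetch, sort/transform, and
// return exactly what the component needs to render.
async function getFeaturedSkills(
  source: SkillSource,
  limit = 6
): Promise<SkillRow[]> {
  const skills = await source.listSkills();
  return [...skills]
    .sort((a, b) => b.installCount - a.installCount)
    .slice(0, limit);
}
```

Because each loader is just an async function over an injected source, the per-page code stays short and the pattern repeats cleanly, which is exactly why AI generates it so reliably.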
Hour 7-8: Search and Filtering (Normally: 12 Hours)
Search and filtering required more iteration. The first version used client-side filtering; I asked Claude to replace it with Supabase full-text search. The second version worked but had a UX issue: filter state did not persist across navigation. The third version got it right.
Three iterations, each taking about 10 minutes, plus 30 minutes of testing and edge case fixes. This is the pattern for anything more complex than basic CRUD: it works on the second or third try, not the first.
Speed multiplier: ~6x
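The fix in the third iteration was making the URL the source of truth for filter state, so filters survive navigation and back-button presses. A minimal sketch with assumed parameter names ("q", "type", "category" are illustrative, not the app's actual ones):

```typescript
// URL-persisted filter state: serialize filters into the query string
// and parse them back out on navigation. Param names are assumptions.
interface SkillFilters {
  query: string;
  type: string | null;
  category: string | null;
}

function filtersToSearchParams(filters: SkillFilters): string {
  const params = new URLSearchParams();
  if (filters.query) params.set("q", filters.query);
  if (filters.type) params.set("type", filters.type);
  if (filters.category) params.set("category", filters.category);
  return params.toString();
}

function searchParamsToFilters(search: string): SkillFilters {
  const params = new URLSearchParams(search);
  return {
    query: params.get("q") ?? "",
    type: params.get("type"),
    category: params.get("category"),
  };
}
```

Round-tripping through `URLSearchParams` like this means shared links reproduce the exact filtered view, which is the behavior the second iteration was missing.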
Day 2: Polish and Features (8 Hours)
Hour 9-10: Component Refinement (Normally: 10 Hours)
Day 2 started with making everything look good. Card hover effects, loading states, error states, empty states, responsive adjustments. These are the details that separate a prototype from a product.
Claude handled each one individually:
"Add a loading skeleton to SkillCard that matches the card's layout"
"Add an empty state to the browse page when no skills match the filter"
"Make the skill detail page responsive for mobile"
Each request took 1-2 minutes to implement and 5 minutes to review. Over two hours, I shipped 15 UI refinements that would have consumed an entire afternoon in my old workflow.
Speed multiplier: ~5x
Hour 11-12: Server Actions (Normally: 8 Hours)
Form submission, skill creation, install tracking. Each server action followed a similar pattern: validate input, insert into Supabase, revalidate the cache, return a result.
Claude generated all three server actions in about 10 minutes. I spent an hour adding stricter input validation (the first version was too lenient -- it accepted titles of any length, so I added a 60-character limit) and error handling for edge cases.
Speed multiplier: ~4x
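The validation I added looked roughly like this. Field names and messages are illustrative, and the real server action also performs the Supabase insert and cache revalidation after this check passes:

```typescript
// Sketch of the stricter submission validation. The 60-character title
// limit is the one described above; everything else is illustrative.
const TITLE_MAX = 60;

interface SubmissionResult {
  ok: boolean;
  error?: string;
}

function validateSkillSubmission(input: {
  title: string;
  description: string;
}): SubmissionResult {
  const title = input.title.trim();
  if (title.length === 0) {
    return { ok: false, error: "Title is required." };
  }
  if (title.length > TITLE_MAX) {
    return {
      ok: false,
      error: `Title must be ${TITLE_MAX} characters or fewer.`,
    };
  }
  if (input.description.trim().length === 0) {
    return { ok: false, error: "Description is required." };
  }
  return { ok: true };
}
```

Returning a result object instead of throwing keeps the server action's happy path and error path symmetrical, which made the form's error display trivial.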
Hour 13-15: Social Sharing and SEO (Normally: 10 Hours)
OpenGraph images, meta tags, social share buttons, sitemap generation, robots.txt. This is the category of work I call "necessary but tedious" -- important for launch but not intellectually stimulating.
AI crushed this. Each feature was a single prompt with a correct first implementation. The SEO metadata followed Next.js 15 conventions perfectly. The sitemap generator iterated over all skills from Supabase and produced a valid XML sitemap.
Speed multiplier: ~15x (tedious, pattern-heavy work is AI's sweet spot)
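The sitemap generator reduces to a pure function over skill slugs. A sketch with an assumed URL scheme; the real version reads the slugs from Supabase:

```typescript
// Sketch of the sitemap generator. The /skills/<slug> URL scheme is
// an assumption for illustration.
interface SitemapEntry {
  slug: string;
  updatedAt: string; // ISO date, e.g. "2025-01-01"
}

function buildSitemap(baseUrl: string, entries: SitemapEntry[]): string {
  const urls = entries
    .map(
      (e) =>
        `  <url><loc>${baseUrl}/skills/${e.slug}</loc>` +
        `<lastmod>${e.updatedAt}</lastmod></url>`
    )
    .join("\n");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${urls}\n</urlset>`
  );
}
```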
Hour 16: Authentication (Normally: 6 Hours)
Supabase Auth with GitHub OAuth. This was a single prompt with two follow-up adjustments (CORS configuration and redirect URL). The auth flow worked end-to-end in 30 minutes.
Speed multiplier: ~12x
Day 3: Edge Cases and Deployment (6 Hours)
Hour 17-18: Error Handling (Normally: 8 Hours)
Every API call can fail. Every database query can return empty. Every user input can be malicious. Day 3 was about making the app resilient.
I worked through each page systematically, asking Claude to add error boundaries, fallback UIs, and input sanitization. This was methodical work -- not creative, not exciting, but essential.
Claude's error handling was acceptable but not excellent. I manually upgraded several error messages from generic "something went wrong" to specific, actionable messages. This is the kind of polish AI does not do well because it requires understanding the user's mental model.
Speed multiplier: ~4x
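The upgrade from generic to actionable messages amounts to mapping known failure modes to copy that tells the user what happened and what to do next. The error codes here are hypothetical, not the app's real ones:

```typescript
// Illustrative mapping from failure modes to actionable messages.
// Error codes are hypothetical stand-ins for the app's real ones.
type AppErrorCode = "network" | "not_found" | "rate_limited" | "unknown";

function errorMessage(code: AppErrorCode): string {
  switch (code) {
    case "network":
      return "We could not reach the server. Check your connection and try again.";
    case "not_found":
      return "That skill no longer exists. It may have been removed by its creator.";
    case "rate_limited":
      return "You are submitting too quickly. Wait a minute and try again.";
    default:
      return "Something went wrong. Please try again.";
  }
}
```

The generic fallback still exists, but it only fires for genuinely unknown failures instead of being the message for everything.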
Hour 19-20: Testing (Normally: 12 Hours)
I wrote test descriptions and had Claude implement them:
"Write tests for the search functionality: empty query returns all skills,
filtering by type returns only matching skills, search query matches title
and description"
Claude generated the test files, including mock data and Supabase mock setup. The tests passed on the first run for simple cases and required one round of fixes for edge cases involving async state.
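Pulled out of the Supabase context, the three test descriptions above reduce to assertions against a search function. `searchSkills` here is a simplified in-memory stand-in for the real full-text query:

```typescript
// Simplified stand-in for the server-side search, so the prompt's
// test cases can be expressed directly against it.
interface SkillDoc {
  title: string;
  description: string;
  type: string;
}

function searchSkills(
  docs: SkillDoc[],
  query: string,
  type?: string
): SkillDoc[] {
  const q = query.trim().toLowerCase();
  return docs.filter((s) => {
    if (type && s.type !== type) return false;
    if (q === "") return true; // empty query returns everything
    return (
      s.title.toLowerCase().includes(q) ||
      s.description.toLowerCase().includes(q)
    );
  });
}

// Minimal fixture data for the three cases in the prompt.
const skills: SkillDoc[] = [
  { title: "Deploy helper", description: "Ships to Vercel", type: "command" },
  { title: "Lint rules", description: "Catches bugs", type: "plugin" },
];
```

Writing the test descriptions myself and letting Claude fill in mocks and fixtures kept the intent of each test human-authored while automating the tedious setup.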
For more on AI-assisted testing, see our testing skills guide.
Speed multiplier: ~6x
Hour 21-22: Deployment (Normally: 4 Hours)
Vercel deployment with environment variables, Supabase production setup, domain configuration. Two prompts and 30 minutes of reviewing the Vercel dashboard.
Speed multiplier: ~8x
The Honest Numbers
| Task Category | Manual Estimate | With AI | Multiplier |
|---|---|---|---|
| Project setup | 4 hours | 0.5 hours | 8x |
| Database schema | 8 hours | 2 hours | 4x |
| Core pages | 20 hours | 3 hours | 7x |
| Search/filtering | 12 hours | 2 hours | 6x |
| UI polish | 10 hours | 2 hours | 5x |
| Server actions | 8 hours | 2 hours | 4x |
| SEO/social | 10 hours | 0.7 hours | 15x |
| Authentication | 6 hours | 0.5 hours | 12x |
| Error handling | 8 hours | 2 hours | 4x |
| Testing | 12 hours | 2 hours | 6x |
| Deployment | 4 hours | 0.5 hours | 8x |
| Total | 102 hours | 17.2 hours | ~6x |
Wait -- 6x, not 20x? The 20x claim comes from calendar time. Working solo at 10 hours per week, 102 hours is about 10 weeks. I completed it in 3 days (22 hours). That is the 20x multiplier -- not in raw hours, but in elapsed time, because AI lets you sustain a much higher effective rate.
What I Built Manually
About 15% of the code was written by hand. These were the moments where AI could not match what I needed.
Custom animations. The hover effects on skill cards needed specific timing and easing that Claude kept getting subtly wrong. After three attempts, it was faster to write the CSS myself.
Business logic edge cases. The quality scoring algorithm had domain-specific rules that I could not explain well enough in a prompt. I wrote the scoring function by hand.
Copy and microcopy. Every button label, error message, and help text was reviewed and often rewritten. AI-generated copy is competent but bland. Good microcopy has personality.
FAQ
Is 20x possible for every project?
No. CRUD-heavy web applications get the highest multiplier. Systems programming, performance optimization, and novel algorithms see smaller gains (2-5x). The multiplier correlates with how much of the work follows established patterns.
Does the speed advantage hold for larger applications?
Yes, with diminishing returns. As complexity grows, more time goes to architecture decisions and integration testing, where AI's advantage is smaller. A 10,000-line app might see 15x. A 100,000-line app might see 8x.
How do you maintain quality at this speed?
By reviewing every change. The speed comes from generation, not from skipping review. I reviewed every line Claude produced. The review is faster than writing, but it is not optional. See our dev workflow guide for review patterns.
What about technical debt from fast AI-generated code?
Less than you might expect. AI-generated code follows consistent patterns, which makes it easier to refactor later. The bigger debt risk is architecture decisions made too quickly -- take your time on those.
Can a junior developer achieve the same speed?
A junior developer will be slower because review takes longer when you have less experience. But the multiplier is still significant -- maybe 5-10x instead of 20x. The learning benefit is also substantial because you see many implementation patterns quickly.
Explore production-ready AI skills at aiskill.market/browse or submit your own skill to the marketplace.
Sources
- Next.js 15 Documentation - App Router and Server Components reference
- Supabase Documentation - Database, auth, and realtime features
- Anthropic Claude Code Guide - AI-assisted development setup