Verified with Lighthouse 12: both apps score 100/100 Performance when properly optimized. Flask's architectural advantages show up in LCP (905ms vs 1.66s) and TTI.
TL;DR — The Numbers
Flask
- Lighthouse Performance 100 / 100
- First Contentful Paint 755 ms
- Largest Contentful Paint 905 ms
- Server response (TTFB) 30 ms
- JS bundle 0 KB
- Build time 0 s
- npm dependencies 0
Next.js 14
- Lighthouse Performance 100 / 100
- First Contentful Paint 762 ms
- Largest Contentful Paint 1.66 s
- Server response (TTFB) 100 ms
- JS bundle 95 KB (split)
- Build time ~47 s
- npm dependencies 847
TTFB and network requests measured with Chrome DevTools. Full methodology below.
What We Benchmarked
We built the exact same application twice — once in Flask, once in Next.js — and measured everything.
The App: A SaaS Dashboard
Both versions implement:
- User authentication (email/password)
- Dashboard with metrics (4 KPI cards)
- Data table with pagination (100 rows)
- Detail page per record
- Search and filter functionality
- Create / update form
- PostgreSQL database (same schema)
- Responsive layout
Flask Stack
- Flask 3.0 + Jinja2 templates
- psycopg2 (raw SQL)
- Compiled Tailwind CSS (no CDN)
- Vanilla JS (no framework)
- Gunicorn + gevent workers
- Deployed on Railway
Next.js Stack
- Next.js 14 (App Router)
- Prisma ORM + PostgreSQL
- Tailwind CSS (PostCSS build)
- React components
- Server-side rendering + hydration
- Deployed on Vercel
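To make the Flask side concrete, the dashboard boils down to one route handler that queries Postgres and renders a Jinja2 template. The sketch below is a simplified stand-in, not the benchmark repo's actual code: `fetch_kpis` returns invented placeholder values where the real app would run psycopg2 queries, and the template is inlined rather than living in `templates/`.

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# Inlined stand-in for templates/dashboard.html -- the real app
# keeps this in a separate Jinja2 template file.
DASHBOARD = """<!doctype html>
<title>Dashboard</title>
<section>
{% for card in kpis %}
  <div class="kpi"><h2>{{ card.label }}</h2><p>{{ card.value }}</p></div>
{% endfor %}
</section>"""

def fetch_kpis():
    # Stand-in for a psycopg2 query; values are invented placeholders.
    return [
        {"label": "Users", "value": 1280},
        {"label": "MRR", "value": "$4,200"},
        {"label": "Churn", "value": "2.1%"},
        {"label": "Signups (7d)", "value": 97},
    ]

@app.route("/dashboard")
def dashboard():
    # One server-side render; the browser receives finished HTML,
    # with no bundle to download and no hydration pass to run.
    return render_template_string(DASHBOARD, kpis=fetch_kpis())
```

That single render is the entire request lifecycle: no client runtime, no second pass.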
Methodology
All tests were run 10 times and averaged. Results are reproducible — see the GitHub repo.
TTFB & Network Requests
Measured with Chrome DevTools Network tab. Both apps loaded the dashboard page with 100 rows of seeded data. TTFB = time from request to first byte of response from the server.
```shell
# Reproduce with:
lighthouse http://localhost:5000/dashboard --output=json
```
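DevTools aside, TTFB can also be approximated from a script. The helper below is my own sketch, not part of the benchmark repo: it times a raw `http.client` request up to the arrival of the status line. Because it includes TCP connect time, it will read slightly higher than the DevTools "Waiting (TTFB)" figure.

```python
import time
import http.client

def measure_ttfb(host, port, path="/", runs=10):
    """Average ms from sending a GET to receiving the status line.

    Includes TCP connect time, so it reads a little higher than
    the DevTools 'Waiting (TTFB)' number.
    """
    samples = []
    for _ in range(runs):
        conn = http.client.HTTPConnection(host, port, timeout=10)
        start = time.perf_counter()
        conn.request("GET", path)
        resp = conn.getresponse()  # returns once the status line has arrived
        samples.append((time.perf_counter() - start) * 1000)
        resp.read()                # drain the body
        conn.close()
    return sum(samples) / len(samples)

# Usage against the running Flask app:
#   measure_ttfb("localhost", 5000, "/dashboard")
```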
Lighthouse Score
Google Lighthouse 12, run in incognito mode. Both apps scored 100/100 Performance. The Flask app uses compiled Tailwind CSS — no CDN, no render-blocking resources. Despite the identical top-line score, Flask's LCP (905ms) is nearly half of Next.js's (1.66s), reflecting the hydration cost React pays after the initial HTML arrives.
Cold Start
The application was left idle for 10 minutes, then the first request was timed. Repeated 5 times. Flask on Railway (paid plan) has no cold start — the ~800ms includes DNS + TCP + TTFB from a European region. Next.js on Vercel Free triggers cold starts on inactivity (~2.4s). On Vercel Pro with always-on, this gap closes.
Lines of Code
Counted with tokei — production code only, excluding tests, lock files, and generated files.
Flask (total: ~420 lines)
- routes/main.py — 68 lines
- templates/ — 312 lines
- app.py — 22 lines
- schema.sql — 18 lines
Next.js (total: ~4,200 lines)
- app/ routes — 1,840 lines
- components/ — 1,420 lines
- prisma/schema — 45 lines
- config files — 895 lines
Full Results
| Metric | Flask | Next.js 14 | Winner |
|---|---|---|---|
| Server response (TTFB) | 30 ms | 100 ms | Flask 3× |
| Network requests | 4 | 28 | Flask 7× |
| JS bundle size | 0 KB | 95 KB (split) | Flask |
| Hydration overhead | None | ~200 ms | Flask |
| Cold start (free tier) | ~800 ms | ~2.4 s | Flask 3× |
| Build time | 0 s | ~47 s | Flask ∞ |
| npm dependencies | 0 | 847 | Flask |
| Direct dependencies | 6 (pip) | 4 (pulling in 847 npm packages) | Flask |
| Lines of code | ~420 | ~4,200 | Flask 10× |
| Config files | 2 | 11 | Flask |
| Time to understand codebase | < 5 min | 1–2 days | Flask |
| Lighthouse Performance | 100 / 100 | 100 / 100 | Tie |
| First Contentful Paint | 755 ms | 762 ms | Tie |
| Largest Contentful Paint | 905 ms | 1.66 s | Flask 1.8× |
| Time to Interactive | 905 ms | 1.66 s | Flask 1.8× |
| AI first-try success rate | ~95% | ~45% | Flask 2× |
Why the Architectural Gap Matters
Lighthouse scores can be tuned. These structural differences can't.
No JavaScript = No Bundle
Flask returns HTML directly. The browser parses HTML and renders immediately — no JavaScript to download, parse, or execute. Next.js must ship a React runtime + your component tree + framework code. Even with code splitting, that overhead exists before you write a single line of application code.
No Hydration = No Double-Render
Next.js renders HTML on the server, sends it to the browser, then re-renders the same content in JavaScript ("hydration") so React can take control. This double-render costs ~200ms and runs on every page load. Flask renders once and it's done.
No Build Step = No Build Time
Flask is interpreted Python. You change a file, reload the browser — it's live. Next.js requires webpack/turbopack to bundle, tree-shake, minify, and generate source maps every deploy. Our 47-second build was on a fast Vercel machine.
Raw SQL = No Hidden N+1
With Prisma ORM, a single findMany() with an include can trigger N+1 queries without any obvious warning. With psycopg2, you write the exact query — one SQL statement, one round trip. Our dashboard page makes exactly 3 database queries. The Prisma version made 11.
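The difference is easy to demonstrate. The sketch below uses stdlib sqlite3 in place of psycopg2 and an invented two-table schema (`records`/`owners`), but the shape is the same: the N+1 loop issues one statement per row, while the explicit JOIN fetches the same data in a single round trip.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE owners  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE records (id INTEGER PRIMARY KEY, title TEXT,
                          owner_id INTEGER REFERENCES owners(id));
    INSERT INTO owners  VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO records VALUES (1, 'alpha', 1), (2, 'beta', 2), (3, 'gamma', 1);
""")

queries = []
db.set_trace_callback(queries.append)  # record every SQL statement issued

# N+1 style: one query for the list, then one per row for its owner.
rows = db.execute("SELECT id, title, owner_id FROM records").fetchall()
for _, _, owner_id in rows:
    db.execute("SELECT name FROM owners WHERE id = ?", (owner_id,)).fetchone()
n_plus_1 = len(queries)

# Explicit JOIN: the same data in one statement.
queries.clear()
joined = db.execute("""
    SELECT r.id, r.title, o.name
    FROM records r JOIN owners o ON o.id = r.owner_id
""").fetchall()
single = len(queries)

print(n_plus_1, single)  # 4 1
```

With 3 rows the loop already costs 4 statements; with the benchmark's 100-row table it would cost 101.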
Fair Objections
We're not cherry-picking. Here's what Next.js does better.
"Next.js can match Flask on Lighthouse scores"
Confirmed — both score 100/100. Lighthouse Performance is implementation-dependent, not framework-dependent. Where Flask wins structurally: LCP (905ms vs 1.66s), TTI (905ms vs 1.66s), TTFB (30ms vs 100ms), zero JS bundle, zero build time. The top-line score is a tie; the underlying metrics tell the real story.
"You used Vercel free tier — cold starts are unfair"
Fair. On Vercel Pro with always-on functions, cold starts are eliminated. But at $20/month vs Railway's $5/month, you're paying 4× for the same outcome. Flask on Railway also has no cold starts — and costs less.
"Next.js is better for SEO with dynamic meta tags"
False. Jinja2 templates support dynamic meta tags perfectly. Every Flask Vibe page has its own meta description, OG tags, and JSON-LD schema — all server-rendered. This site's Lighthouse SEO score is 100.
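To make the SEO point concrete, here is a minimal sketch of server-rendered meta tags using plain Jinja2. The template is inlined and the tag values are invented placeholders, not this site's actual markup:

```python
from jinja2 import Template

# Inlined stand-in for a base template's <head> block.
HEAD = Template("""\
<meta name="description" content="{{ description }}">
<meta property="og:title" content="{{ title }}">
<meta property="og:description" content="{{ description }}">
<script type="application/ld+json">
 {"@context": "https://schema.org", "@type": "Article",
  "headline": {{ title | tojson }}}
</script>""")

# Each page passes its own values; everything arrives fully rendered.
html = HEAD.render(
    title="Flask vs Next.js: the benchmark",
    description="Same app, two stacks, full numbers.",
)
print(html)
```

Crawlers see the finished tags in the initial response, with no JavaScript execution required.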
"Next.js is better for real-time features"
True for WebSocket-heavy apps. If you're building a chat app, a multiplayer game, or live collaboration, React's reactivity model is genuinely useful. For dashboards, CRUDs, blogs, SaaS tools, and marketing sites — you don't need it, and you're paying the cost without the benefit.
Reproduce It Yourself
Don't take our word for it. Clone both repos, deploy them, and run the benchmarks.
One-Command End-to-End Benchmark
Clones both repos, seeds the databases, builds Next.js, starts both servers, runs Lighthouse, and prints a side-by-side comparison.
Prerequisites
Python 3, git, and Node.js/npm must be on your PATH.

```shell
# Download the script, then:
python benchmark.py
# Results are saved to ./benchmark_run/
#   flask-results.json  nextjs-results.json
```
Lighthouse is installed automatically if missing. Re-running skips already-cloned repos. Ports 5000 and 3000 must be free.
Flask Benchmark App
```shell
git clone https://github.com/callmefredcom/flaskvibe-benchmark-flask
cd flaskvibe-benchmark-flask
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
psql benchmark_flask < schema.sql
python seed.py
flask run
```
github.com/callmefredcom/flaskvibe-benchmark-flask
Next.js Benchmark App
```shell
git clone https://github.com/callmefredcom/flaskvibe-benchmark-nextjs
cd flaskvibe-benchmark-nextjs
npm install
npx prisma db push && npm run db:seed
npm run build   # ~47s
npm start
```
github.com/callmefredcom/flaskvibe-benchmark-nextjs
Run the Benchmark Script
```shell
# Install the Lighthouse CLI
npm install -g lighthouse

# Run against Flask
lighthouse http://localhost:5000/dashboard \
  --output=json \
  --throttling-method=simulate \
  --output-path=./flask-results.json

# Run against Next.js
lighthouse http://localhost:3000/dashboard \
  --output=json \
  --throttling-method=simulate \
  --output-path=./nextjs-results.json
```
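Once both JSON reports exist, the headline metrics sit under `audits[*].numericValue` (milliseconds) and `categories.performance.score` (0 to 1) in Lighthouse's report format. A small comparison script, my own sketch using the file names from the commands above:

```python
import json

# Lighthouse audit ids mapped to the labels used in this post.
METRICS = {
    "first-contentful-paint": "FCP",
    "largest-contentful-paint": "LCP",
    "interactive": "TTI",
    "server-response-time": "TTFB",
}

def summarize(report):
    """Pull the headline numbers out of a parsed Lighthouse JSON report."""
    out = {"score": round(report["categories"]["performance"]["score"] * 100)}
    for audit_id, label in METRICS.items():
        out[label] = round(report["audits"][audit_id]["numericValue"])
    return out

def compare(flask_path="./flask-results.json", next_path="./nextjs-results.json"):
    for name, path in (("Flask", flask_path), ("Next.js", next_path)):
        with open(path) as f:
            s = summarize(json.load(f))
        print(f"{name:8} score={s['score']}  " +
              "  ".join(f"{k}={v}ms" for k, v in s.items() if k != "score"))
```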
Ready to Build Faster?
Faster server response. Zero hydration. Zero build step. Zero npm dependencies. Start with Flask and spend your time on features, not configuration.