
What is Time to First Byte (TTFB)?

Time to First Byte (TTFB) is the time between the browser requesting a URL and the first byte of the response arriving. It covers DNS lookup, TCP connection, TLS handshake, server processing, and the network transit of that first byte back to the browser.

TTFB is not a Core Web Vital, but it's load-bearing for every other metric. If TTFB is slow, everything downstream is slow — LCP cannot paint before HTML arrives.

Thresholds

Google's recommended buckets at the p75 of real users:

  • Good: 800 ms or less
  • Needs improvement: 800 to 1800 ms
  • Poor: over 1800 ms

What's in the TTFB budget

For a request from a typical user in North America to a server in the US:

Step                  Typical time
DNS lookup            20–100 ms
TCP connect           30–150 ms
TLS handshake         50–200 ms
Server processing     depends
First byte on wire    20–80 ms

The first three are basically constants unless you move the server: summing the table's ranges, connection setup alone costs roughly 100–450 ms before your code even runs. Server processing is the variable, and it's where most TTFB pathologies live.

What causes slow TTFB

  • Slow server-side rendering — a Next.js page that makes 5 sequential database queries before rendering
  • Uncached dynamic pages — every request hits the origin
  • Distant origin — server in Ohio, user in Singapore
  • Expensive middleware — auth checks, A/B test allocation, feature flag resolution on every request
  • Bad hosting plan — shared hosts throttle under load
  • Cold starts — serverless functions spinning up on first request after idle

Fixes

Cache aggressively. Static generation (SSG) beats SSR. If you must SSR, use ISR / stale-while-revalidate so most requests hit a cached page.
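In Next.js App Router terms, ISR is a one-line route option. A minimal sketch, assuming a hypothetical products page, API URL, and an illustrative 60-second window:

```typescript
// app/products/page.tsx: a hypothetical route, shown as an ISR sketch.
// `revalidate` tells Next.js to serve the cached HTML and re-render it in
// the background at most once per 60 seconds, so most requests never pay
// the SSR cost. The window is an arbitrary choice, not a recommendation.
export const revalidate = 60;

export default async function ProductsPage() {
  // This fetch result is cached along with the rendered page.
  const res = await fetch("https://api.example.com/products");
  const products: { id: string; name: string }[] = await res.json();
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}
```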

Use a CDN with edge caching. Cloudflare, Fastly, Vercel Edge Network. Moves the p75 response time down to single-digit milliseconds for cached assets.
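Whichever CDN you pick, what it honors is the Cache-Control header your origin sends. A minimal sketch of an edge-cache policy, with an invented helper name and made-up durations:

```typescript
// Build a Cache-Control value for a CDN: browsers always revalidate
// (max-age=0), the edge caches the response for `edgeSeconds`, and may
// serve a stale copy for up to `staleSeconds` while it refetches in the
// background (stale-while-revalidate).
function edgeCacheControl(edgeSeconds: number, staleSeconds: number): string {
  return `public, max-age=0, s-maxage=${edgeSeconds}, stale-while-revalidate=${staleSeconds}`;
}

// e.g. response.setHeader("Cache-Control", edgeCacheControl(60, 300));
console.log(edgeCacheControl(60, 300));
// → public, max-age=0, s-maxage=60, stale-while-revalidate=300
```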

Move the origin closer to your users. Multi-region deployment, or at minimum pick a region near your largest audience.

Parallelize server work. If you need multiple data fetches, Promise.all them instead of awaiting sequentially.
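A sketch of the difference, with stand-in fetchers (the names and 50 ms delays are invented):

```typescript
// Two independent lookups, each simulated as taking ~50 ms.
const getUser = async () => {
  await new Promise((r) => setTimeout(r, 50));
  return "user";
};
const getOrders = async () => {
  await new Promise((r) => setTimeout(r, 50));
  return "orders";
};

// Sequential: total wait is the SUM of the delays (~100 ms).
async function sequential() {
  const user = await getUser();
  const orders = await getOrders();
  return [user, orders];
}

// Parallel: both start immediately; total wait is the MAX delay (~50 ms).
async function parallel() {
  const [user, orders] = await Promise.all([getUser(), getOrders()]);
  return [user, orders];
}
```

Both return the same data; only the wall-clock time (and therefore TTFB) differs.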

Warm serverless functions. Vercel's "Fluid Compute" and similar keep functions warm for frequently-hit paths.

Audit middleware. Every middleware.ts function runs on every request. Remove anything that doesn't truly need to be there.
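In Next.js specifically, a `matcher` scopes middleware to the routes that actually need it. A sketch assuming a hypothetical session-cookie auth gate (the paths are illustrative):

```typescript
// middleware.ts: runs before every request that matches `config.matcher`.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(req: NextRequest) {
  // Hypothetical auth gate: only matched paths pay this cost.
  if (!req.cookies.has("session")) {
    return NextResponse.redirect(new URL("/login", req.url));
  }
  return NextResponse.next();
}

// Scope the middleware: every other route skips it entirely,
// so public pages don't add middleware latency to their TTFB.
export const config = {
  matcher: ["/dashboard/:path*", "/account/:path*"],
};
```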

The CDN test

Hit your page with curl from the same location, repeatedly:

  curl -w "TTFB: %{time_starttransfer}s\n" -o /dev/null -s https://yoursite.com

If the second request is much faster than the first, caching is working. If every request is slow, TTFB is burning in your stack, not on the network.

By Paulo de Vries

Frequently asked questions

What does TTFB measure?
Time to First Byte — the time from the browser requesting a URL until the first byte of response arrives. It includes DNS lookup, TCP handshake, TLS handshake, and server processing.
What is a good TTFB value?
Under 800 ms is "good." 800–1800 ms is "needs improvement." Over 1800 ms is "poor." TTFB directly limits LCP and FCP.
How do I improve TTFB?
Cache aggressively (SSG > ISR > SSR), use a CDN with edge caching, move your origin closer to users, parallelize server work, and warm serverless functions.
