Methodology
This page documents exactly where the numbers on Web Vitals Explorer come from, how they're calculated, how often they refresh, and what they do and don't represent. If you're citing data from this site in a report or talk, this is the page to link.
Data source
Every Core Web Vitals metric on this site comes from the Chrome User Experience Report (CrUX) — a public dataset published and maintained by Google. CrUX aggregates real-user measurements from Chrome browsers worldwide and reports them at the origin level on a 28-day rolling window.
We do not perform our own measurements. We do not use Lighthouse, run synthetic tests, or scrape pages. Every LCP, INP, CLS, FCP, and TTFB number on this site is a number Google published — we just make it easier to look up.
What "field data" means
Field data is the experience of real users on real devices and real networks. It's the opposite of lab data, which is what Lighthouse or WebPageTest produce by simulating a load on a controlled machine.
Field data is what Google's search algorithm actually uses for page-experience signals. A site can score 100 in Lighthouse and still rank poorly if its real users experience slow LCP on flaky mobile networks. CrUX is the source of truth.
How the API call works
When you search a domain, we send POST requests to chromeuxreport.googleapis.com/v1/records:queryRecord with the canonical origin (always https://, no trailing slash, no path). We query three form factors in parallel:
- Mobile (PHONE) — what Google uses for search ranking
- Desktop (DESKTOP) — the desktop view
- All form factors (ALL_FORM_FACTORS) — combined; the default lookup view
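A minimal sketch of one lookup, assuming a CrUX API key. The endpoint and request fields follow Google's public CrUX API docs; the helper names and the choice to express ALL_FORM_FACTORS by omitting the field are ours:

```typescript
// Sketch of one CrUX lookup. CRUX API key handling and helper names
// are illustrative; the endpoint and body shape follow the public API.
type FormFactor = "PHONE" | "DESKTOP" | "ALL_FORM_FACTORS";

const CRUX_ENDPOINT =
  "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

// Build the JSON body for one (origin × form factor) query.
// Omitting formFactor entirely yields the all-form-factors aggregate.
function buildQueryBody(origin: string, formFactor: FormFactor): object {
  const body: Record<string, unknown> = { origin };
  if (formFactor !== "ALL_FORM_FACTORS") body.formFactor = formFactor;
  return body;
}

// Fire the three lookups in parallel.
async function queryAll(origin: string, apiKey: string) {
  const factors: FormFactor[] = ["PHONE", "DESKTOP", "ALL_FORM_FACTORS"];
  return Promise.all(
    factors.map((ff) =>
      fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(buildQueryBody(origin, ff)),
      }).then((res) => res.json())
    )
  );
}
```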
The result is the 75th-percentile (p75) value for each metric, plus the full histogram. We store p75 and the collection-period window. The raw histogram is discarded — we don't need it for our display.
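Pulling p75 out of the response looks roughly like this. The response shape (metrics keyed by name, each with a `percentiles.p75`) follows the CrUX API; one wrinkle worth a comment is that CLS's p75 arrives as a string while the time-based metrics are numbers:

```typescript
// Extract p75 for every metric in a CrUX queryRecord response and
// drop the histogram. CLS p75 comes back as a string (e.g. "0.10"),
// so everything is coerced to a number here.
interface CruxRecord {
  metrics: Record<string, { percentiles: { p75: number | string } }>;
}

function extractP75(record: CruxRecord): Record<string, number> {
  const out: Record<string, number> = {};
  for (const [name, metric] of Object.entries(record.metrics)) {
    out[name] = Number(metric.percentiles.p75);
  }
  return out;
}
```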
What we store
For each (domain × form factor) lookup, we cache a snapshot in our Postgres database with: lcp_p75, inp_p75, cls_p75, fcp_p75, ttfb_p75, collection_period_start, collection_period_end, and the timestamp we fetched it. We keep up to 12 snapshots per (domain × form factor) for the 30-day trend chart. Older snapshots are rolled off.
We do not store any user data, IP, browser identifier, or session information. The only data we keep is publicly available domain-level aggregates already published by Google.
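The retention rule above can be sketched like this. The field names mirror the columns listed in this section; the trimming helper itself is an illustration, not our actual database code:

```typescript
// Hypothetical shape of one cached snapshot (fields mirror the columns
// listed above) plus the roll-off rule: keep the 12 newest snapshots
// per (domain × form factor).
interface Snapshot {
  lcpP75: number;
  inpP75: number;
  clsP75: number;
  fcpP75: number;
  ttfbP75: number;
  collectionPeriodStart: string; // ISO date
  collectionPeriodEnd: string;   // ISO date
  fetchedAt: string;             // ISO date we fetched it
}

const MAX_SNAPSHOTS = 12;

// Newest-first sort, then keep at most 12.
function rollOff(snapshots: Snapshot[]): Snapshot[] {
  return [...snapshots]
    .sort((a, b) => b.fetchedAt.localeCompare(a.fetchedAt))
    .slice(0, MAX_SNAPSHOTS);
}
```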
Refresh cadence
CrUX itself updates monthly — typically the second Tuesday of each month. Our ISR (incremental static regeneration) cache is set to 30 days to match that cadence. In practice, a page you visit gets one fresh CrUX fetch per CrUX update, not more.
On a busy page, the first request after the 30-day window triggers a regeneration in the background; subsequent requests serve the new data immediately. You will never see stale data older than ~30 days plus propagation delay.
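In Next.js terms, the cadence above amounts to a 30-day revalidate window on the page. This is a sketch under that assumption; the exact wiring depends on the app:

```typescript
// Sketch: Next.js App Router page-level ISR window of 30 days.
// After expiry, the first request triggers a background regeneration,
// as described above.
export const revalidate = 60 * 60 * 24 * 30; // seconds
```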
Why some domains return "No data"
CrUX only publishes data for origins with enough Chrome traffic to anonymize. Domains below the threshold return HTTP 404 — that's Google's way of saying "not enough data to publish." We render a friendly empty state for these pages, and we mark them noindex so Google doesn't treat them as thin content.
If a domain you care about isn't showing up, it likely doesn't have enough Chrome users yet. Bigger sites almost always have data.
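The branching is simple enough to show in a few lines. This is an illustrative helper, not our exact code; the name and the three-way split are ours:

```typescript
// Map a CrUX API HTTP status to what the page should render.
// "no-data" drives the friendly empty state (and the noindex tag).
type LookupState = "ok" | "no-data" | "error";

function classifyCruxResponse(status: number): LookupState {
  if (status === 200) return "ok";
  if (status === 404) return "no-data"; // origin below CrUX's traffic threshold
  return "error"; // rate limits, bad requests, outages, etc.
}
```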
Rating thresholds
We use Google's published thresholds verbatim. We do not invent custom "scores" or weighted composites:
- LCP: good < 2.5s · needs improvement 2.5–4.0s · poor > 4.0s
- INP: good < 200ms · needs improvement 200–500ms · poor > 500ms
- CLS: good < 0.1 · needs improvement 0.1–0.25 · poor > 0.25
- FCP: good < 1.8s · needs improvement 1.8–3.0s · poor > 3.0s
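Transcribed into code, the rating logic looks like this. The thresholds are the ones listed above; the function name and the choice to count values exactly on the boundary as the better rating are ours:

```typescript
// Google's published Core Web Vitals thresholds, as listed above.
// Units: LCP/FCP in seconds, INP in milliseconds, CLS unitless.
type Rating = "good" | "needs-improvement" | "poor";

const THRESHOLDS: Record<string, { good: number; poor: number }> = {
  LCP: { good: 2.5, poor: 4.0 },  // seconds
  INP: { good: 200, poor: 500 },  // milliseconds
  CLS: { good: 0.1, poor: 0.25 }, // unitless
  FCP: { good: 1.8, poor: 3.0 },  // seconds
};

function rate(metric: keyof typeof THRESHOLDS, value: number): Rating {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good"; // boundary values count as the better bucket
  if (value <= t.poor) return "needs-improvement";
  return "poor";
}
```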
Industry hubs & comparisons
Our industry pages (SaaS, E-commerce, Finance, News, Developer Tools, AI) group a curated set of well-known domains for each industry and rank them by the relevant metric. The domain selection is editorial — we picked sites that are widely recognized in each industry, not sponsored or paid placements. We'll expand each list over time and the seed file is in our public GitLab repository.
Limitations
- Origin-level only: CrUX reports per-origin aggregates. We don't expose per-URL data because CrUX doesn't publish it for most URLs.
- Chrome only: CrUX captures Chrome users (including Chromium-based browsers like Edge). Safari and Firefox users are not represented.
- 28-day rolling window: a single bad day won't show up; a sustained regression takes ~14–28 days to fully reflect.
- p75, not median: the value we show is the 75th-percentile experience. 75% of users had an experience at least this good; the slowest quarter had a worse one.
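To make the p75 point concrete, here is the nearest-rank way to read a 75th percentile off a set of samples. This is purely an illustration of what the number means; CrUX computes its percentiles server-side:

```typescript
// Illustrative nearest-rank p75: sort ascending and take the smallest
// value that at least 75% of samples fall at or below.
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length); // 1-based nearest rank
  return sorted[rank - 1];
}
```

With four samples [100, 200, 300, 400], p75 is 300: three of the four experiences are at least that good, and the slowest quarter is worse.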
Corrections
If you find a number on this site that doesn't match what CrUX directly reports for the same origin and form factor, please tell us. We'll investigate within 24 hours. Likely causes: cache lag (we can be up to ~30 days behind CrUX's latest publish), or origin normalization (we normalize http:// to https:// and strip a leading www. when the user's input includes one).
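The normalization described above can be sketched as a small helper. The name and exact regexes are illustrative, not our production code:

```typescript
// Hypothetical origin normalizer matching the rules described above:
// force https://, drop any path and trailing slash, and strip a
// leading "www." when the input includes one.
function normalizeOrigin(input: string): string {
  const withScheme = /^https?:\/\//.test(input) ? input : `https://${input}`;
  const url = new URL(withScheme);
  const host = url.hostname.replace(/^www\./, "");
  return `https://${host}`;
}
```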
License & reuse
CrUX itself is published by Google under the Creative Commons Attribution 4.0 license. Our presentation layer (this website's text, layout, and code) is also free to cite with attribution. Link to webvitalstool.com and you're good.
Page last reviewed: . Methodology changes are noted here when they happen.