Core Web Vitals
LCP, INP (replaced FID), CLS — the thresholds, the field-vs-lab distinction, and the playbook for fixing each one. Plus the new Visual Stability Index.
Core Web Vitals are the only performance metrics Google has ever publicly committed to as ranking inputs, and as of March 2024 the trio is LCP, INP, and CLS — INP officially replaced FID. Google measures these in the field via the Chrome User Experience Report (CrUX), not in your lab tests. If your CrUX data is red, your rankings bleed; if your lab data is green but field is red, you have a measurement bug, not a ranking advantage.
Core Web Vitals waterfall
Reorder the head and toggle each resource's loading hint — watch LCP, INP, CLS update.
<head> resources (top → bottom = load order)
- analytics.js ~320ms
- app-bundle.js ~480ms
- theme.css ~180ms
- Inter.woff2 ~220ms
- hero.jpg ~540ms
- gallery-1.jpg ~260ms
- gallery-2.jpg ~260ms
Network waterfall (simulated)
Lighthouse-style score
Composite of LCP, INP, CLS, FCP weighted equally.
Try clicking Apply best practice: JS gets defer, fonts and the hero image get preload, below-fold images get loading="lazy", and the CSS is inlined. The score should jump above 90.
TL;DR
- LCP under 2.5s, INP under 200ms, CLS under 0.1 — these are the “Good” thresholds at the 75th percentile of CrUX, not averages, not your dev-machine numbers.
- Field beats lab every time. Lighthouse simulates; CrUX measures real Chrome users. When they disagree, fix the field.
- INP is a sitewide signal. Unlike FID’s first-interaction-only sampling, INP captures the worst interaction across the entire visit, which exposes long-running React handlers and bloated event listeners that FID never caught.
The mental model
Core Web Vitals are like a restaurant health inspection. Lighthouse is your own kitchen check on a quiet Tuesday morning; CrUX is the inspector arriving unannounced during Saturday dinner rush with three orders backed up. The grade that goes on the door is the one from the second visit, not the first.
LCP measures how fast the main thing loads — usually a hero image or above-the-fold headline. INP measures how fast the page reacts when you tap, click, or type across the entire visit. CLS measures how much the page shifts under your finger while you’re trying to read or click.
Each metric has three buckets: Good, Needs Improvement, and Poor. Google’s published cutoffs use the 75th percentile of CrUX over a rolling 28-day window. That detail matters: a single bad day on a viral page can drag your monthly score, and a single fast device family in your audience can hide a slow long-tail.
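A quick sketch of why the 75th percentile behaves so differently from an average. This is a hypothetical helper using the nearest-rank method with made-up LCP samples, not CrUX's internal implementation:

```javascript
// Hypothetical helper: the p75 CrUX reports is the value that 75% of
// page loads beat, not an average.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Index of the 75th-percentile sample (nearest-rank method).
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Nine fast loads and one viral-day disaster (LCP in seconds):
const lcpSeconds = [1.2, 1.3, 1.4, 1.4, 1.5, 1.6, 1.7, 1.8, 2.0, 9.0];
const avg = lcpSeconds.reduce((a, b) => a + b, 0) / lcpSeconds.length;

console.log(p75(lcpSeconds)); // 1.8, still "Good" (<= 2.5s)
console.log(avg.toFixed(2));  // 2.29, the outlier skews the mean, not the p75
```

The flip side also holds: if slightly more than a quarter of your audience is on slow devices, the p75 lands inside that slow tail no matter how fast the median is.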
The thing most teams miss: Search Console evaluates Core Web Vitals per URL group (pages sharing a template), not per individual URL. If your product detail template scores poorly, every product page inherits the signal. Fix the template, fix every URL it spawns.
Deep dive: the 2026 reality
The thresholds Google enforces on the 75th percentile of CrUX:
| Metric | Good | Needs improvement | Poor |
|---|---|---|---|
| LCP (Largest Contentful Paint) | ≤ 2.5s | 2.5s – 4.0s | > 4.0s |
| INP (Interaction to Next Paint) | ≤ 200ms | 200ms – 500ms | > 500ms |
| CLS (Cumulative Layout Shift) | ≤ 0.1 | 0.1 – 0.25 | > 0.25 |
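These buckets map directly onto the "good" / "needs-improvement" / "poor" ratings the web-vitals library reports. A minimal sketch with the threshold values from the table above (the helper name is illustrative):

```javascript
// Upper bounds of "Good" and "Needs improvement", from the table above.
const THRESHOLDS = {
  LCP: [2500, 4000], // ms
  INP: [200, 500],   // ms
  CLS: [0.1, 0.25],  // unitless score
};

function rate(metric, value) {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}

console.log(rate("LCP", 2400)); // "good"
console.log(rate("INP", 350));  // "needs-improvement"
console.log(rate("CLS", 0.3));  // "poor"
```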
INP replaced FID on March 12, 2024. FID only sampled the first interaction’s input delay; INP samples every interaction’s full latency to the next paint and reports the worst (or approximately the 98th percentile when there are 50+ interactions). The result: pages that felt fine under FID frequently fail INP, because long React reconciliations, heavy onChange handlers, and synchronous localStorage writes all surface in the new metric.
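The selection logic can be approximated in a few lines. This is a hypothetical helper, not Chromium's actual algorithm: it takes the worst interaction, but on busy pages drops one of the highest latencies per 50 interactions, which lands near the 98th percentile:

```javascript
// Approximation of how INP is picked from a visit's interaction latencies.
function estimateINP(latenciesMs) {
  const sorted = [...latenciesMs].sort((a, b) => b - a); // worst first
  // Drop one top outlier per 50 interactions (never drop them all).
  const skip = Math.min(Math.floor(latenciesMs.length / 50), sorted.length - 1);
  return sorted[skip];
}

// FID would only have sampled the first interaction's input delay.
// INP surfaces the 800ms React reconciliation no matter when it happens:
console.log(estimateINP([40, 55, 38, 800, 47])); // 800, rated "poor"
```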
Google also rolled out the Visual Stability Index (VSI) in late 2025 as an experimental successor to CLS. VSI weights shifts by the proportion of viewport affected and the user’s likely intent at the moment of shift — a 0.05 shift while a user’s thumb is mid-tap counts more than the same shift while idle. CrUX exposes VSI alongside CLS today; Google has not committed to ranking impact yet but Search Advocate Martin Splitt confirmed at Google I/O 2025 that “we’re studying it actively.”
The current AI crawlers (GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, and Google-Extended) do not measure Core Web Vitals. Most fetch raw HTML without executing JavaScript, and all operate on tight time budgets, abandoning pages that take more than a few seconds to respond. So while CWV is technically a Google ranking input, a fast site is also a prerequisite for AI Overview and AI Mode citations: a page that takes 8 seconds to reach LCP rarely makes it into Gemini’s grounding pass.
Modern frameworks default to good or bad CWV in predictable ways:
| Framework | LCP default | INP risk | CLS risk |
|---|---|---|---|
| Astro (islands) | Good — minimal JS | Low — selective hydration | Low — static layout |
| Next.js App Router (RSC) | Good if streaming | Medium — large client bundles | Medium — loading.tsx boundaries |
| SvelteKit | Good — small bundle | Low — fine-grained reactivity | Low |
| Remix | Good — server-first | Medium — full hydration | Low |
| Classic Next.js Pages | Mixed — CSR fallbacks | High — large _app.js | Medium |
| Create React App / Vite SPA | Poor — empty shell | High — heavy hydration | High |
The single biggest INP regression source in 2026 is third-party tag managers. GTM containers above 60 KB compressed, especially those firing CMP scripts and analytics on every interaction, are the leading cause of INP > 200ms in the wild — DebugBear’s State of Web Performance 2025 report attributes 38% of INP failures to GTM-loaded scripts.
Visualizing it
```mermaid
flowchart TD
  A[Real user opens page in Chrome] --> B[Browser samples LCP, INP, CLS]
  B --> C[Chrome posts metrics to CrUX]
  C --> D[28-day rolling 75th percentile per origin and per URL group]
  D --> E{Pass thresholds?}
  E -->|Yes| F[Page Experience signal: positive]
  E -->|No| G[Page Experience signal: negative]
  F --> H[Ranking input feeds Helpful Content + ranking systems]
  G --> H
  I[Lighthouse / PSI lab test] -.simulated, advisory only.-> J[Local dev metrics]
  J -.does NOT feed.-> H
```
Bad vs. expert
The bad approach
```html
<!-- Hero image, lazy-loaded, no dimensions, served from origin -->
<img src="/hero.jpg" loading="lazy" alt="Product hero" />

<!-- Render-blocking webfont with no fallback -->
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Inter:wght@400;700&display=block" />

<!-- 320 KB GTM container with CMP and four analytics scripts -->
<script src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXX"></script>
```
This wrecks all three metrics at once. loading="lazy" on the LCP image delays it by an extra 200–400ms because the lazy-load heuristic triggers after layout. No width/height means the browser allocates zero space and shifts the page once the image arrives, spiking CLS. display=block on the font hides text for up to 3 seconds, making LCP a font-paint wait. The synchronous GTM tag blocks the main thread on every tap and pushes INP past 500ms on mid-range Android.
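The CLS cost of that unsized hero can be estimated by hand. A single shift’s score is the impact fraction (share of the viewport affected) multiplied by the distance fraction (how far content moved relative to the viewport); the numbers below are illustrative:

```javascript
// Illustrative numbers for the unsized hero above.
const viewportHeight = 800; // px
const imageHeight = 400;    // px the image claims when it finally arrives

// Impact fraction: the image plus the text it pushes down span the
// full viewport across the before/after frames.
const impactFraction = 1.0;

// Distance fraction: content below the hero drops by the image height.
const distanceFraction = imageHeight / viewportHeight; // 0.5

const shiftScore = impactFraction * distanceFraction;
console.log(shiftScore); // 0.5, double the 0.25 "Poor" threshold in one shift
```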
The expert approach
```html
<!-- LCP image: eager, fetchpriority high, explicit dimensions, AVIF + responsive -->
<img
  src="/hero-1280.avif"
  srcset="/hero-640.avif 640w, /hero-1280.avif 1280w, /hero-1920.avif 1920w"
  sizes="(max-width: 768px) 100vw, 1280px"
  width="1280"
  height="720"
  alt="Product hero"
  fetchpriority="high"
  decoding="async"
/>

<!-- Self-hosted font, preloaded, swap fallback to avoid invisible text -->
<link rel="preload" href="/fonts/inter-var.woff2" as="font" type="font/woff2" crossorigin />
<style>
  @font-face {
    font-family: "Inter";
    src: url("/fonts/inter-var.woff2") format("woff2-variations");
    font-display: swap;
    font-weight: 100 900;
  }
</style>

<!-- GTM deferred until after LCP, with size-of-impact budget -->
<script>
  window.addEventListener("load", () => {
    requestIdleCallback(
      () => {
        const s = document.createElement("script");
        s.src = "https://www.googletagmanager.com/gtm.js?id=GTM-XXXX";
        s.async = true;
        document.head.appendChild(s);
      },
      { timeout: 3000 }
    );
  });
</script>
```
```js
// web-vitals 4.x — capture INP attribution and ship to your analytics
import { onLCP, onINP, onCLS } from "web-vitals/attribution";

const send = (metric) => {
  navigator.sendBeacon(
    "/_vitals",
    JSON.stringify({
      name: metric.name,
      value: metric.value,
      rating: metric.rating,
      attribution: metric.attribution,
      url: location.pathname,
    })
  );
};

onLCP(send, { reportAllChanges: false });
onINP(send, { reportAllChanges: false, durationThreshold: 40 });
onCLS(send, { reportAllChanges: false });
```
The fetchpriority="high" hint moves the LCP image to the front of Chrome's fetch queue instead of the default lower image priority, typically saving 200–400ms. font-display: swap keeps text visible on the fallback while the webfont loads. Deferring GTM until idle frees the main thread during the interaction window so INP stays under 200ms. The web-vitals library with attribution tells you exactly which element caused each LCP and which event handler caused each INP, so you can fix the actual offender instead of guessing.
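When an interaction handler genuinely has a lot of work to do, the fix is to chunk it and yield back to the main thread between chunks. A hedged sketch: yieldToMain and processItem are hypothetical names, and the setTimeout branch is the fallback for browsers without scheduler.yield():

```javascript
// Yield control so the browser can paint between chunks, keeping each
// task under the ~50ms long-task threshold that inflates INP.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0)); // fallback
}

const processItem = (item) => item * 2; // hypothetical per-item work

async function handleClick(items) {
  const results = [];
  for (const [i, item] of items.entries()) {
    results.push(processItem(item));
    if (i % 20 === 19) await yieldToMain(); // yield every 20 items
  }
  return results;
}

handleClick([1, 2, 3]).then((r) => console.log(r)); // [ 2, 4, 6 ]
```

The first paint after the interaction can now happen after the first chunk, so INP reflects one small task rather than the whole loop.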
Do this today
- Open PageSpeed Insights at pagespeed.web.dev and enter your homepage. Read the “Discover what your real users are experiencing” section first; that’s CrUX field data. Ignore the lab scores until field is green.
- In Google Search Console, open Experience → Core Web Vitals. Sort by “Poor URLs” and click into the worst URL group. The page lists the failing metric and a sample URL set.
- Install the web-vitals library (npm i web-vitals) and wire onLCP, onINP, and onCLS from web-vitals/attribution to your analytics. Tag every event with the URL pattern, not the URL.
- In DebugBear or Treo Site Speed, set up monitoring on the five highest-traffic templates with synthetic + RUM mode. Alert on a 75th-percentile regression of more than 100ms.
- For LCP: open Chrome DevTools, go to Performance → Lighthouse → Mobile, run an audit, and click the LCP marker in the trace. Add fetchpriority="high", explicit width/height, and AVIF/WebP to the LCP element.
- For INP: in DevTools, open Performance → Record, then tap the slowest interaction. Look for tasks longer than 50ms on the main thread. Move them off the critical path with requestIdleCallback or break them up with scheduler.yield().
- For CLS: enable DevTools’ Rendering → Layout Shift Regions overlay and reload the page. Every flash is a shift. Reserve space with aspect-ratio CSS or explicit dimensions; never inject a banner above existing content.
- Audit your Google Tag Manager container in Tag Assistant. Anything firing on Page View that isn’t strictly required for the first interaction should be moved to Window Loaded or Custom Event triggers.
- Validate the fix in the CrUX Dashboard (g.co/chromeuxdash) for your origin. CrUX updates daily over a 28-day rolling window; expect fixes to take two to three weeks to show up at the 75th percentile.
- Re-check Search Console → Core Web Vitals weekly until the URL group flips from “Poor” to “Good.” Then add the template to a regression suite so the next deploy doesn’t undo it.