
Core Web Vitals in 2026: What’s Changed

Google’s performance thresholds keep evolving. INP has replaced FID, LCP scoring has tightened, and CLS expectations are stricter than ever. Here is what developers need to know right now.


Core Web Vitals are not a static specification. Google has continuously refined which metrics matter, what the thresholds are, and how they weight into ranking signals. If you set your CWV strategy in 2022 and never revisited it, your understanding of what constitutes “good” performance is out of date.

Here is a precise account of where the metrics stand in 2026, what has changed, and which optimisation levers have the highest return.

INP has replaced FID — and the bar is higher

The most significant structural change in recent years was the replacement of First Input Delay (FID) with Interaction to Next Paint (INP) as a Core Web Vital.

FID was a narrow metric: it measured only the delay before the browser started processing the first user interaction. A page could have a great FID score but feel sluggish throughout a session because long tasks were blocking the main thread during subsequent interactions.

INP measures the full duration of every interaction — from user gesture (click, tap, keyboard input) to the next visual paint. It samples all interactions during a session and reports the worst one (excluding outliers). This is a far more honest measure of how responsive a page feels.
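In simplified terms, the aggregation works like this. The sketch below is a model, not the exact spec: the real metric drops roughly one of the highest-duration interactions per 50 recorded, so a single anomalous spike does not dominate the score.

```javascript
// Simplified model of INP aggregation (durations in ms).
// Sort worst-first, then skip ~1 outlier per 50 interactions.
function approximateINP(durations) {
  if (durations.length === 0) return null;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const outliersToSkip = Math.floor(durations.length / 50);
  return sorted[Math.min(outliersToSkip, sorted.length - 1)];
}
```

With few interactions this simply returns the worst one; on long sessions the very worst samples are discarded before reporting.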

SPAs are disproportionately affected by INP

Single-page applications built with React, Vue, or Angular tend to have worse INP scores than traditional multi-page sites. State updates, re-renders, and client-side routing all run on the main thread. If your INP is above 500 ms on a React app, look at long tasks in the Performance panel, defer non-critical state updates, and consider concurrent rendering patterns like useTransition.
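A common mitigation is to break long tasks apart and hand control back to the main thread between units of work. A minimal sketch of that pattern, using scheduler.yield() where available and falling back to a zero-delay timeout elsewhere (the helper names and the 50 ms budget are illustrative):

```javascript
// Yield control to the event loop so the browser can handle pending
// input and paint. Uses scheduler.yield() where supported, otherwise
// falls back to a zero-delay timeout.
const yieldToMain = () =>
  typeof scheduler !== 'undefined' && scheduler.yield
    ? scheduler.yield()
    : new Promise((resolve) => setTimeout(resolve, 0));

// Process a large list without holding the main thread for the full
// duration: yield whenever the time budget for one chunk is spent.
async function processInChunks(items, handleItem, budgetMs = 50) {
  let deadline = performance.now() + budgetMs;
  for (const item of items) {
    handleItem(item);
    if (performance.now() >= deadline) {
      await yieldToMain(); // let queued interactions run
      deadline = performance.now() + budgetMs;
    }
  }
}
```

Because interactions queued during the chunk run at each yield point, the worst interaction duration drops from "the whole task" to roughly one chunk.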

INP thresholds

  • Good: ≤ 200 ms
  • Needs Improvement: 200–500 ms
  • Poor: > 500 ms
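When bucketing your own RUM samples, these bands reduce to a tiny helper (the function name is illustrative):

```javascript
// Map an INP sample (ms) to the rating bands listed above.
function rateINP(ms) {
  if (ms <= 200) return 'good';
  if (ms <= 500) return 'needs-improvement';
  return 'poor';
}
```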

LCP: image scoring has tightened

Largest Contentful Paint measures when the largest visible element in the viewport — almost always a hero image or above-the-fold text block — finishes rendering.

The threshold remains 2.5 s for “good,” but Google has tightened how it attributes LCP to images served through complex loading pipelines. Key changes to be aware of:

  • Images behind fetchpriority="low" or with lazy loading applied to the LCP element are now penalised more aggressively in Lighthouse 12+.
  • Animated images (GIFs, animated WebP) are now assessed against the full animation duration, not just the first frame.
  • Soft navigations in SPAs are now tracked separately — LCP is measured after each client-side route transition, not just on initial load.
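To see which element is being attributed as LCP in the field, you can watch candidate entries with a PerformanceObserver. A browser-only sketch (the wrapper and callback names are illustrative):

```javascript
// Observe LCP candidate entries. The last entry reported before first
// input (or the page being hidden) is the page's LCP. Browser-only API.
function observeLCP(onCandidate) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      onCandidate(entry.startTime, entry.element);
    }
  });
  // buffered: true replays entries that fired before observation began
  observer.observe({ type: 'largest-contentful-paint', buffered: true });
  return observer;
}
```

If the element logged here is not the one you expected (a late-loading banner, say), your prioritisation hints are pointed at the wrong resource.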

Prioritise your LCP element explicitly

Add fetchpriority="high" to your LCP image and make sure it is never lazy-loaded. Then preconnect to the origin serving it: <link rel="preconnect" href="https://cdn.example.com" />. These two changes alone commonly improve LCP by 400–800 ms on image-heavy pages.

CLS: the layout shift checklist

Cumulative Layout Shift measures visual instability — how much page content moves unexpectedly during loading. The good threshold remains 0.1, but the ways pages accumulate CLS have multiplied as modern web patterns have evolved.

Common culprits in 2026:

  • Dynamic ad slots and third-party embeds that load after the initial paint without reserved space.
  • Web fonts causing FOUT/FOIT. Use font-display: optional or size-adjust in your @font-face declaration to prevent text from reflowing.
  • Images without explicit width and height attributes that collapse to zero height until loaded.
  • Late-injected banners — cookie consent overlays, notification prompts, and sale banners that push content down after the initial render.
  • CSS animations that affect layout properties (top, left, width, height). Use transform and opacity instead.
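A few of these fixes in CSS form (selector names, dimensions, and font metrics are illustrative):

```css
/* Reserve space for a dynamic ad slot before it loads */
.ad-slot {
  min-height: 250px;
}

/* Keep the image box stable before the file arrives */
.hero img {
  width: 100%;
  aspect-ratio: 16 / 9;
  object-fit: cover;
}

/* Avoid reflow when the web font swaps in */
@font-face {
  font-family: "Body";
  src: url("/fonts/body.woff2") format("woff2");
  font-display: optional;
  size-adjust: 100.5%; /* tune to match the fallback font's metrics */
}
```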

Measuring CWV correctly

There is an important distinction between field data and lab data that determines whether your CWV scores affect your rankings.

Field data (also called Real User Monitoring or RUM) is collected from actual visitors using the Chrome User Experience Report (CrUX). This is the data Google uses for ranking. Check it in Google Search Console under “Core Web Vitals” or via the CrUX API.

Lab data comes from synthetic tests run by tools like Lighthouse, PageSpeed Insights, or WebPageTest. Lab scores are reproducible and useful for diagnosing issues and measuring the impact of optimisations, but they do not directly correspond to the CrUX data Google ranks you on.

75th percentile, not average

Google assesses CWV at the 75th percentile of all sessions for a URL. This means 25% of your users can have an experience worse than your “good” threshold and you will still pass. But it also means a single very slow user segment — mobile users on 3G, for instance — can drag your entire score below the threshold even if most users are fine.
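When analysing your own RUM data, compute the same percentile rather than an average. A sketch using the nearest-rank method (CrUX's exact interpolation may differ):

```javascript
// 75th percentile of a set of samples, nearest-rank method.
function p75(samples) {
  if (samples.length === 0) return null;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[rank];
}
```

On a skewed distribution the mean and p75 can disagree badly; p75 is the number that has to clear the threshold.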

The highest-ROI optimisations right now

Based on the current state of the metrics, these are the changes with the highest expected return for most production sites:

  1. Audit your LCP element. Add fetchpriority="high", remove any lazy loading from it, and ensure it is served from a CDN with an appropriate Cache-Control header.
  2. Run a long-tasks audit using Chrome DevTools Performance panel. Any task over 50 ms is a candidate for scheduler.yield() or deferred execution.
  3. Reserve space for every dynamic element — ads, embeds, images — with explicit dimensions or aspect-ratio CSS before content loads.
  4. Move cookie/notification banners to a fixed position that does not affect document flow, or inject them before the first paint using SSR.
  5. Use content-visibility: auto on below-fold sections to reduce rendering cost of off-screen content.
#performance #core-web-vitals #seo #lcp #cls #inp



These articles are written by the same engineers who built CodeAva’s audit engine.