Core Web Vitals shifted from "SEO bonus" to "ranking factor that quietly punishes you" years ago. In 2026, with INP having fully replaced FID and Google leaning harder on mobile field data, holding green CWV is hygiene, not an advantage. This is the checklist we apply on every Schedars project.

TL;DR

Three metrics matter: LCP under 2.5s (loading), CLS under 0.1 (visual stability), INP under 200ms (interactivity). Field data (real Chrome users via CrUX) ranks you, not lab data. Hit all three on mobile P75 and SEO follows. The fixes are mostly the same checklist for any framework — Astro, Next.js, Rails, plain HTML — and the discipline is in the budget, not the technique.

The 3 metrics that matter

  • LCP (Largest Contentful Paint): how fast the biggest above-the-fold element renders. Threshold: 2.5s P75 mobile.
  • CLS (Cumulative Layout Shift): how much visible content jumps around during load. Threshold: 0.1 P75 mobile.
  • INP (Interaction to Next Paint): how responsive the page feels when users tap/click/type. Threshold: 200ms P75 mobile.

TTFB and FCP are still measured, but they are diagnostics, not Core Web Vitals, and they never ranked you directly. Don't optimize them at the expense of LCP/CLS/INP: they correlate, but you're only paid for the three above.

LCP checklist

  • Identify the LCP element (Lighthouse, PageSpeed Insights, or Web Vitals Chrome extension)
  • If LCP is an image: serve in WebP/AVIF, set explicit width/height attributes, use fetchpriority="high"
  • Preload the LCP image: <link rel="preload" as="image" href="...">
  • Self-host fonts instead of loading them from the Google Fonts CDN; use font-display: swap and preload only the font the LCP element actually uses
  • No render-blocking JS in <head>; defer or async everything that isn’t critical
  • CSS critical path: inline above-the-fold CSS, async-load the rest
  • Reduce TTFB: make sure the server responds in under 600ms (CDN caching, edge rendering, lighter SSR)
  • Avoid client-side framework hydration before LCP — hydrate after first paint where possible
  • Image dimensions: don't serve a 4000px image into an 800px slot; the wasted bytes delay LCP
  • On Astro, use <Image /> from astro:assets; on Next.js, use next/image with priority on the LCP image (see the sketch after this list)
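A minimal sketch of the Next.js case, assuming the App Router; hero.jpg and the alt text are placeholders:

    // app/page.jsx: a static import lets Next infer width/height (no CLS),
    // and priority both preloads the image and sets fetchpriority="high".
    import Image from "next/image";
    import hero from "./hero.jpg";

    export default function Page() {
      return (
        <Image
          src={hero}
          alt="Product hero"
          priority
          sizes="100vw" // lets the browser pick a right-sized srcset candidate
        />
      );
    }

On Astro, <Image /> from astro:assets infers dimensions the same way; add loading="eager" on the hero, since Astro images default to lazy.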

CLS checklist

  • Always specify width and height on <img> and <video> so the browser can reserve space before the media loads
  • Reserve space for ads, embeds, iframes — fixed min-height containers
  • Avoid font-swap layout shifts: preload critical fonts and use size-adjust on the fallback @font-face so its metrics match the web font
  • Don’t inject content above existing content unless triggered by user action
  • Animations: use transform and opacity only, never properties that trigger layout
  • Use CSS aspect-ratio for media containers, e.g. aspect-ratio: 16 / 9
  • Sticky elements (cookie banner, headers): position from the start, not after JS loads
  • Skeleton screens for dynamic content blocks, sized to match the content they replace; when shifts still appear, attribute them with the snippet after this list
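When shifts still show up, attribute them before guessing. A dev-console sketch using the Layout Instability API (Chromium-only):

    // Logs every unexpected layout shift and the DOM nodes that moved.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (entry.hadRecentInput) continue; // shifts right after input don't count
        console.log(
          `layout shift of ${entry.value.toFixed(4)}`,
          entry.sources?.map((source) => source.node)
        );
      }
    }).observe({ type: "layout-shift", buffered: true });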

INP checklist

  • Audit long tasks in the Performance panel: any task over 50ms can delay input handling and inflate INP
  • Break up long synchronous JS: yield to the main thread with scheduler.yield() or setTimeout(0) (helper sketched after this list)
  • Defer non-critical JS execution: IntersectionObserver, requestIdleCallback, web workers
  • Avoid React state updates in event handlers that trigger expensive re-renders without memoization
  • Use React Server Components / Astro islands to ship less client JS in the first place
  • Audit third-party scripts (analytics, chat widgets, ad scripts) — they’re the #1 INP killer in field data
  • Async-load heavy libraries (charts, maps, video players) only when needed
  • On Next.js: prefer Server Components for data fetching, Client Components only for interactivity
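A minimal yield helper, assuming Chrome's scheduler.yield() with a setTimeout fallback elsewhere; processAll and its arguments are illustrative names:

    function yieldToMain() {
      // Prefer the Scheduler API where it exists (Chromium).
      if (globalThis.scheduler?.yield) {
        return scheduler.yield();
      }
      return new Promise((resolve) => setTimeout(resolve, 0));
    }

    // Split one long task into many short ones so input gets handled in between.
    async function processAll(items, processItem) {
      for (const item of items) {
        processItem(item);
        await yieldToMain(); // browser can paint and run pending event handlers
      }
    }

If each item is cheap, yield every N items instead of every item, since each yield adds a task boundary of its own.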

How to measure (lab vs field)

Lab data is what Lighthouse and PageSpeed Insights report: a single synthetic run in a controlled environment, useful for catching regressions in CI. Field data is what real Chrome users experience, aggregated by Google in CrUX (the Chrome User Experience Report) and surfaced in PageSpeed Insights and Search Console. Field data is what ranks you.

Lab and field can disagree. A site that scores 95 in Lighthouse can still fail LCP for 60% of real users, because real users have slow phones, congested networks, and 30 tabs open. Optimize against field data by having the web-vitals JS library report into your analytics, as sketched below.
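A minimal reporting sketch with the web-vitals library (npm i web-vitals); "/vitals" is a hypothetical endpoint, so point it at whatever your analytics ingests:

    import { onCLS, onINP, onLCP } from "web-vitals";

    function sendToAnalytics(metric) {
      const body = JSON.stringify({
        name: metric.name,   // "LCP" | "CLS" | "INP"
        value: metric.value,
        id: metric.id,       // unique per page load, needed to dedupe
      });
      if (navigator.sendBeacon) {
        navigator.sendBeacon("/vitals", body); // survives page unload
      } else {
        fetch("/vitals", { body, method: "POST", keepalive: true });
      }
    }

    onCLS(sendToAnalytics);
    onINP(sendToAnalytics);
    onLCP(sendToAnalytics);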

What we automate at Schedars

  • Lighthouse CI on every PR, blocking merge below thresholds: LCP 2.5s, CLS 0.1, and TBT 200ms as the lab proxy, since lab runs can't measure INP (config sketched after this list)
  • web-vitals reporting to PostHog, alerting when mobile P75 crosses a threshold (this is where INP is actually enforced)
  • size-limit on JS bundles per route — bundle bloat is caught before merge
  • Visual regression testing via Playwright on top-5 pages
  • Lighthouse runs against a mid-tier mobile profile (WebPageTest's Moto G6 throttling) rather than desktop-class lab hardware
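The PR gate is a few lines of config. A sketch assuming @lhci/cli runs in the pipeline; Total Blocking Time stands in for INP, which needs real interactions:

    // lighthouserc.js
    module.exports = {
      ci: {
        collect: { numberOfRuns: 3 },
        assert: {
          assertions: {
            "largest-contentful-paint": ["error", { maxNumericValue: 2500 }],
            "cumulative-layout-shift": ["error", { maxNumericValue: 0.1 }],
            "total-blocking-time": ["error", { maxNumericValue: 200 }], // lab proxy for INP
          },
        },
        upload: { target: "temporary-public-storage" },
      },
    };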

Bottom line

Holding green CWV in 2026 is mostly discipline: set a performance budget, automate the checks, and fix regressions the same week instead of letting them stack up. The techniques are well known; the difference is whether the team enforces them on every PR or only before launch.

Need a CWV audit on your existing site or a perf budget for a new one? Tell us the URL — we’ll send back a prioritized fix list.