ðŸĢHatchlingPerformanceWeb VitalsOptimization

Core Web Vitals

LCP, INP, CLS — these aren't just metrics. They're the language of performance that Google, your PM, and your interviewers all speak.


Core Web Vitals are Google's user-centric performance metrics. They measure what users actually experience — loading speed, interactivity, and visual stability. They also affect search ranking, which means performance is a product concern, not just an engineering one.

The Three Metrics

Largest Contentful Paint (LCP)

What: Time until the largest visible content element renders. Typically a hero image, heading, or video poster.

Good: < 2.5s | Needs improvement: 2.5-4s | Poor: > 4s
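These thresholds are the same buckets the web-vitals library uses when it assigns a rating to a metric; a minimal sketch of the mapping for LCP (the helper name is mine):

```javascript
// Map an LCP value in milliseconds to the standard rating buckets.
// Thresholds: good up to 2.5s, poor beyond 4s.
function rateLCP(valueMs) {
  if (valueMs <= 2500) return 'good';
  if (valueMs <= 4000) return 'needs-improvement';
  return 'poor';
}
```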

Common causes of poor LCP:

  • Slow server response (TTFB > 800ms)
  • Render-blocking CSS/JS
  • Large or unoptimized images (or an LCP image that is lazy-loaded)
  • Client-side rendering where the LCP element depends on JavaScript
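The first cause is easy to check from Navigation Timing. A small sketch: the entry shape mirrors the browser's `PerformanceNavigationTiming`, and the 800ms cutoff follows the list above; the helper names are mine:

```javascript
// TTFB = time from the start of the navigation to the first
// byte of the response arriving.
function ttfb(navEntry) {
  return navEntry.responseStart - navEntry.startTime;
}

function isTTFBPoor(navEntry, budgetMs = 800) {
  return ttfb(navEntry) > budgetMs;
}

// In the browser you'd feed it the real entry:
//   const [nav] = performance.getEntriesByType('navigation');
//   isTTFBPoor(nav);
```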

Optimization strategies:

<!-- Preload the LCP image -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high" />
 
<!-- Use modern formats with fallback -->
<picture>
  <source srcset="/hero.avif" type="image/avif" />
  <source srcset="/hero.webp" type="image/webp" />
  <img src="/hero.jpg" alt="Hero" width="1200" height="600" fetchpriority="high" />
</picture>
// SSR or static generation for LCP content
// Don't render LCP elements behind client-side data fetches
export async function getServerSideProps() {
  const hero = await getHeroContent();
  return { props: { hero } };
}

Interaction to Next Paint (INP)

What: The latency of the slowest interaction observed during the page visit (with one high outlier discounted per 50 interactions). INP replaced FID, which captured only the input delay of the first interaction.

Good: < 200ms | Needs improvement: 200-500ms | Poor: > 500ms

INP measures the full cycle: input delay + processing time + presentation delay.
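As a toy model of how the final INP value is chosen: it is essentially the worst interaction latency, except that pages with many interactions get one high outlier ignored per 50 interactions. This sketch assumes that published rule; the real metric is derived from event-timing entries, not a plain array of durations:

```javascript
// durations: latency in ms of every discrete interaction on the page.
function computeINP(durations) {
  if (durations.length === 0) return undefined;
  const sorted = [...durations].sort((a, b) => b - a); // descending
  // Skip one high outlier per 50 interactions.
  const index = Math.min(
    Math.floor(durations.length / 50),
    sorted.length - 1
  );
  return sorted[index];
}
```

With fewer than 50 interactions this is just the maximum; with 50 or more, the single worst value stops dominating the score.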

Common causes of poor INP:

  • Long tasks on the main thread (>50ms blocks)
  • Heavy re-renders triggered by user interaction
  • Synchronous operations in event handlers

Optimization strategies:

// Bad: blocking the main thread during interaction
button.addEventListener('click', () => {
  const result = expensiveComputation(data); // blocks for 200ms
  renderResult(result);
});
 
// Good: yield to the browser between work chunks
button.addEventListener('click', async () => {
  showLoadingState();
  await scheduler.yield(); // let browser paint the loading state
  // Note: scheduler.yield() is Chromium-only at the time of writing;
  // await new Promise((r) => setTimeout(r)) is a portable fallback
  const result = expensiveComputation(data);
  renderResult(result);
});
 
// Alternative: break work into chunks
function processInChunks(items, chunkSize = 50) {
  let i = 0;
  function nextChunk() {
    const end = Math.min(i + chunkSize, items.length);
    while (i < end) processItem(items[i++]);
    // requestIdleCallback waits for idle time; use setTimeout(nextChunk, 0)
    // where it's unsupported (e.g. Safari)
    if (i < items.length) requestIdleCallback(nextChunk);
  }
  nextChunk();
}

Cumulative Layout Shift (CLS)

What: The largest burst of unexpected layout shift scores during the page's lifetime. Shifts less than 1s apart are grouped into a "session window" (capped at 5s), and CLS is the worst window, not a raw sum over the whole visit.

Good: < 0.1 | Needs improvement: 0.1-0.25 | Poor: > 0.25
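Since the 2021 definition change, CLS is the largest "session window" of shifts: shifts less than 1s apart accumulate into a window no longer than 5s, and the worst window wins. A sketch under that definition, taking (timestamp, score) pairs sorted by time; names and shape are mine:

```javascript
// shifts: [{ t: timestamp in ms, score: layout shift score }], sorted by t.
function computeCLS(shifts) {
  let best = 0;
  let windowSum = 0;
  let windowStart = 0;
  let prevT = -Infinity;
  for (const { t, score } of shifts) {
    const gapOk = t - prevT < 1000;        // < 1s since the previous shift
    const spanOk = t - windowStart < 5000; // window shorter than 5s
    if (gapOk && spanOk) {
      windowSum += score;   // extend the current session window
    } else {
      windowSum = score;    // start a new session window
      windowStart = t;
    }
    prevT = t;
    best = Math.max(best, windowSum);
  }
  return best;
}
```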

Common causes of poor CLS:

  • Images/videos without dimensions
  • Dynamically injected content above the viewport
  • Web fonts causing FOIT/FOUT
  • Ads or embeds without reserved space

Optimization strategies:

<!-- Always set dimensions on media -->
<img src="/photo.jpg" width="800" height="600" alt="Photo" />
 
<!-- Reserve space for dynamic content -->
<div style="min-height: 250px;">
  <!-- Ad loads here later -->
</div>
/* Prevent font swap layout shift */
@font-face {
  font-family: 'CustomFont';
  src: url('/font.woff2') format('woff2');
  font-display: optional; /* brief invisible period, then the fallback
                             stays if the font isn't ready: no swap, no CLS */
}

Measuring Web Vitals

In the Lab (development)

  • Lighthouse — Chrome DevTools, CI integration
  • Chrome DevTools Performance panel — frame-by-frame analysis
  • Web Vitals Extension — real-time overlay in the browser

In the Field (real users)

import { onLCP, onINP, onCLS } from 'web-vitals';
 
onLCP((metric) => sendToAnalytics('LCP', metric));
onINP((metric) => sendToAnalytics('INP', metric));
onCLS((metric) => sendToAnalytics('CLS', metric));
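The sendToAnalytics helper above is not part of web-vitals; a hedged sketch of what it might look like, where the /analytics endpoint and payload shape are assumptions:

```javascript
// Hypothetical helper: serialize the metric and ship it with
// sendBeacon, which survives page unload (unlike a plain fetch).
function buildPayload(name, metric) {
  return JSON.stringify({
    name,                 // 'LCP' | 'INP' | 'CLS'
    value: metric.value,  // ms for LCP/INP, unitless for CLS
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    id: metric.id,        // unique per page load, for deduplication
  });
}

function sendToAnalytics(name, metric, transport) {
  // transport is injectable for testing; the default is the beacon API.
  const send = transport ?? ((body) => navigator.sendBeacon('/analytics', body));
  send(buildPayload(name, metric));
}
```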

Field data matters more than lab data. A page that scores 100 in Lighthouse but has poor 75th-percentile field metrics is not performant.

RUM vs Synthetic

Approach                 What it measures          When to use
Synthetic (Lighthouse)   Controlled environment    CI/CD gates, debugging
RUM (field data)         Real user experience      Production monitoring, business impact

The Performance Budget

Set budgets, not goals:

LCP: < 2.5s at p75
INP: < 200ms at p75
CLS: < 0.1 at p75
Total JS: < 200KB gzipped
Total CSS: < 50KB gzipped

Budget violations should break CI, not just warn.
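A budget gate can be a small script in CI. This sketch assumes you already have p75 field (or lab) numbers per metric; the names and object shape are illustrative:

```javascript
// measured / budgets: { LCP: ms, INP: ms, CLS: unitless, ... }
function budgetViolations(measured, budgets) {
  const violations = [];
  for (const [metric, limit] of Object.entries(budgets)) {
    const value = measured[metric];
    if (value !== undefined && value > limit) {
      violations.push(`${metric}: ${value} > budget ${limit}`);
    }
  }
  return violations;
}

// In CI: fail the build, don't just warn.
// const violations = budgetViolations(measured, { LCP: 2500, INP: 200, CLS: 0.1 });
// if (violations.length > 0) { console.error(violations.join('\n')); process.exit(1); }
```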

Interview Signal

Web Vitals questions test whether you optimize based on data or intuition:

  1. Metric specificity — knowing that LCP measures perceived load, not DOMContentLoaded
  2. Root cause analysis — tracing a poor CLS score to font loading or missing image dimensions
  3. Measurement sophistication — distinguishing lab vs field data, knowing that p75 is the target percentile
  4. Business context — connecting performance to conversion rates, SEO ranking, and user retention