Frontend Performance
Measuring and optimizing browser-side speed with the same rigor you apply to backend systems
Learning Objectives
By the end of this module you will be able to:
- Explain Core Web Vitals (LCP, INP, CLS) and articulate why field measurements from real users matter more than Lighthouse lab scores.
- Trace the critical rendering path and identify which resources block first paint.
- Describe the hydration cost problem and compare islands architecture, partial hydration, and React Server Components as solutions.
- Implement image optimization using modern formats, responsive image markup, and correct fetchpriority for LCP elements.
- Set up bundle size budgets in CI and use code splitting to enforce them.
- Apply perceived performance techniques (skeleton screens, optimistic UI, View Transitions) and explain the psychological mechanism behind each.
Core Concepts
The Frontend Performance Measurement Framework
Backend engineers are used to p95/p99 latency dashboards. Frontend performance has its own measurement framework built around a different question: not "how fast is the server?" but "what does the user actually experience in the browser, on their device, on their network?"
That framework is Core Web Vitals.
Core Web Vitals are three field-measured browser-side metrics Google uses to evaluate user experience. They are:
- LCP — Largest Contentful Paint: loading performance. How long until the largest image or text block is visible? Good threshold: ≤ 2.5 seconds.
- INP — Interaction to Next Paint: responsiveness. How long does the browser take to visually respond to user input, measured at the 98th percentile? Good threshold: ≤ 200ms. (INP replaced First Input Delay in March 2024.)
- CLS — Cumulative Layout Shift: visual stability. How much do elements unexpectedly jump around during loading? Good threshold: ≤ 0.1.
All three are evaluated at the 75th percentile across real field visits — balancing user experience coverage without punishing rare outliers.
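The 75th-percentile evaluation can be reproduced directly from raw RUM samples. A minimal sketch using the nearest-rank method (the LCP sample values are invented for illustration):

```javascript
// Nearest-rank percentile over field samples, as used for the
// Core Web Vitals pass/fail evaluation (p75 across real visits).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank method
  return sorted[rank - 1];
}

// Invented LCP samples (ms) from ten field visits:
const lcpSamples = [1200, 1400, 1800, 2100, 2300, 2600, 2900, 3400, 4100, 6000];
const p75 = percentile(lcpSamples, 75);
console.log(p75);         // 3400
console.log(p75 <= 2500); // false: this page fails the "good" LCP threshold
```

Note how a handful of slow visits is enough to fail the threshold even though the median visit here is well under 2.5s.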
The critical nuance: these metrics must be measured from field data, not lab tools. The Chrome User Experience Report (CrUX) collects this real-user data. Approximately 50% of pages that score 100 in Lighthouse fail field thresholds on real devices. This mirrors the backend problem of benchmark results that look great in staging but collapse under production load.
Sites that clear all three Core Web Vitals thresholds show 24% lower bounce rates, directly linking technical performance to business outcomes.
Why INP Is Different
INP cannot be measured by lab tools at all — it requires real user data by definition. It measures the worst interaction latency (98th percentile) across the entire page visit, capturing sluggishness during complex tasks, not just the first click. This matters because hydration windows, background tasks, and third-party script execution all affect INP in ways a lab test cannot reproduce.
CLS in Detail
CLS uses the formula impact-fraction × distance-fraction aggregated within 5-second session windows. Shifts within 500ms of user input are excluded — only unexpected shifts count. The most common causes are images without declared dimensions, dynamically injected banners, and web font swaps.
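To make the formula concrete, here is a simplified sketch of scoring a single shift, assuming a vertical-only move with rectangle geometry reduced to the essentials (real in-browser attribution is more involved):

```javascript
// Score one layout shift: impact fraction (share of the viewport touched
// by the element before plus after the move) times distance fraction
// (move distance relative to the largest viewport dimension).
function layoutShiftScore(viewport, rectBefore, rectAfter) {
  // Vertical-only simplification: union of the two vertical spans,
  // scaled by the element's width.
  const top = Math.min(rectBefore.top, rectAfter.top);
  const bottom = Math.max(rectBefore.top + rectBefore.height,
                          rectAfter.top + rectAfter.height);
  const impactArea = (bottom - top) * rectBefore.width;
  const impactFraction = impactArea / (viewport.width * viewport.height);
  const distanceFraction =
    Math.abs(rectAfter.top - rectBefore.top) /
    Math.max(viewport.width, viewport.height);
  return impactFraction * distanceFraction;
}

// Invented example: on a 360x640 viewport, a late-loading banner pushes
// a full-width 200px-tall content block down by 100px.
const score = layoutShiftScore(
  { width: 360, height: 640 },
  { top: 0, width: 360, height: 200 },   // element before the shift
  { top: 100, width: 360, height: 200 }  // same element after
);
console.log(score.toFixed(3)); // 0.073
```

A single shift like this consumes most of the 0.1 "good" budget for the entire page visit, which is why one injected banner is often enough to fail CLS.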
The Critical Rendering Path
Before any pixel appears, the browser executes a sequential pipeline: parse HTML into the DOM, parse CSS into the CSSOM, combine the two into the render tree, compute layout, then paint.
The important architectural insight: CSS is render-blocking. The browser cannot construct the render tree until both the DOM and CSSOM are complete. Every external stylesheet is a synchronous dependency on first paint.
Interventions at Each Stage
HTML parsing. Scripts without async or defer pause HTML parsing while they download and execute. Use defer when you need DOM-ready execution in order, async for independent scripts like analytics where order does not matter.
CSS. Inline critical above-the-fold CSS directly in <head> to eliminate the HTTP round-trip. Defer non-critical CSS with media="print" or load it asynchronously after first paint.
Late-discovered resources. The browser's preload scanner tries to find resources early, but fonts, hero images, and API-fetched content are often discovered late. Use <link rel="preload"> to fetch them early without blocking parsing. Use sparingly — overuse causes bandwidth competition.
Visual continuity during font loading. font-display: swap prevents invisible text during font download by rendering immediately with a fallback font. Be aware: if fallback and target fonts differ significantly in metrics, this introduces CLS.
Only 15% of mobile pages pass the render-blocking resources audit. The median mobile Total Blocking Time is 1,916ms.
Hydration Costs
Hydration is the process of attaching JavaScript event listeners and framework state to HTML that the server already rendered. The core problem: you are paying to re-execute work the server already did.
Server-Side Rendering gives you a fast First Contentful Paint because HTML arrives ready to display. But the page cannot respond to clicks until the framework has hydrated. This creates an "uncanny valley" — the page looks ready but doesn't respond, which is worse than a blank screen because users blame themselves for clicking "wrong."
The cost scales dramatically on real hardware:
- Browser DevTools on a developer laptop: 50–100ms hydration
- Real mobile device (Moto G4 class, 4G): 500–1000ms hydration
This is the same class of problem as benchmarking a database query on the server machine vs. measuring latency from a distant client. The environment where you measure is not the environment your users are in.
Average React SSR applications have a 4.2-second Time to Interactive on mobile. 53% of users abandon sites that take over 3 seconds to load.
Three Architectural Responses
1. Islands Architecture
The server renders the full page as HTML. Dynamic regions ("islands") are hydrated independently as self-contained widgets. Slow islands do not block fast ones. Static regions ship zero JavaScript.
Astro implements islands as a built-in feature using client directives:
- client:load — hydrate immediately
- client:visible — hydrate when the component enters the viewport (progressive hydration via IntersectionObserver)
- client:idle — hydrate when the browser is idle (requestIdleCallback)
2. React Server Components (RSC)
RSC eliminates hydration entirely for non-interactive content by rendering those components only on the server and sending HTML. No JavaScript for that component is shipped to the browser. One content site reduced hydration work by ~70% using Server Components for list rendering.
React 18's Suspense also enables selective hydration: high-priority interactive components hydrate first. Wix reported a 20% payload reduction and 40% INP improvement combining selective hydration with Suspense and streaming.
3. Qwik's Resumability
Qwik takes a different approach entirely: it serializes framework state into the HTML so the browser can resume where the server left off without re-running component logic. The initial JavaScript payload is O(1) regardless of application size — it does not grow with component count.
Think of full hydration as a database read-your-writes consistency model: you must fully replay the server's work before the client can proceed. Islands architecture is closer to eventual consistency across independent shards — each island independently reaches interactive state. RSC is more like read-through caching: truly static content never touches client-side execution at all.
Image Optimization
Images are the LCP element on 85% of desktop pages and 76% of mobile pages. This makes image optimization the highest-leverage single intervention available for most content sites.
Modern Formats
WebP and AVIF provide 30–50% smaller file sizes than JPEG/PNG at equivalent visual quality, directly improving LCP. AVIF achieves ~30% better compression than WebP. Browser support is now practical for both: WebP has effectively universal support; AVIF reaches 85%+ of browsers (Chrome, Firefox, Safari 16+, Edge).
The recommended pattern:
```html
<picture>
  <source srcset="hero.avif" type="image/avif">
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="..." width="800" height="600">
</picture>
```
This is progressive enhancement: browsers pick the best format they support, falling back to JPEG.
Responsive Images
srcset and sizes let the browser choose the right resolution for the actual viewport and device pixel ratio, preventing mobile users from downloading full desktop-sized images:
```html
<img
  srcset="hero-400.webp 400w, hero-800.webp 800w, hero-1600.webp 1600w"
  sizes="(max-width: 640px) 100vw, 800px"
  src="hero-800.jpg"
  alt="..."
  width="800"
  height="600"
>
```

fetchpriority and LCP
The browser's default resource prioritization may not identify your LCP image as high priority. Explicitly signal it:
```html
<img src="hero.jpg" fetchpriority="high" alt="..." width="800" height="600">
```
Real-world deployments report LCP improvements in the 4–30% range, e.g. from 2.6s to 1.9s. Use this attribute only on the actual LCP element. Applying it broadly causes bandwidth competition and negates the benefit.
Two Rules Never to Break
- Always declare width and height on images. Without them, the browser cannot reserve space before the image loads, and the layout shifts when the image arrives. 65% of desktop pages have at least one dimensionless image causing preventable CLS.
- Never apply loading="lazy" to LCP images. Lazy loading defers loading until the element is near the viewport, which is exactly when the browser should be urgently loading the LCP element, not starting the fetch. 16% of pages incorrectly lazy-load LCP images according to the 2025 Web Almanac.
For below-the-fold images, loading="lazy" is correct and requires zero JavaScript. It works in 95%+ of browsers with safe fallback to eager loading.
Bundle Budgets and Code Splitting
A bundle size budget is a performance constraint enforced at build time — analogous to a memory limit or a time limit on a database query. It converts "we should keep the bundle small" from aspiration into a regression gate.
Lighthouse CI enforces performance budgets in CI/CD pipelines and fails builds when thresholds are exceeded.
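As a sketch, a Lighthouse budgets file (budget.json) might look like the following; the exact schema and supported resource types should be verified against the Lighthouse CI documentation for your version (resource sizes are in kilobytes, timings in milliseconds):

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "image", "budget": 500 },
      { "resourceType": "total", "budget": 1200 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ],
    "timings": [
      { "metric": "interactive", "budget": 4000 },
      { "metric": "first-contentful-paint", "budget": 1800 }
    ]
  }
]
```

The value of the file is less in the specific numbers than in the fact that a pull request that exceeds them fails CI, the same way a failing unit test does.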
Understanding What You're Measuring
Bundle analyzers report three distinct numbers:
- Stat size: pre-minification
- Parsed size: post-minification (what the browser parses)
- Gzipped size: compressed transfer size
Compression reduces the bytes transmitted over the network. It does not reduce parse time — the browser decompresses and parses the full code regardless. A 300kb gzipped bundle is still 1MB+ of JavaScript the browser must parse and compile.
Pages include 35+ third-party scripts on average, and these scripts cause 60–70% of performance issues, adding 30% to load time on unmonitored sites. The top 10 scripts average 1.4 seconds of blocking time. Monitor them with the same rigor as first-party code.
Code Splitting Strategies
Webpack supports three approaches:
- Entry Points — manual configuration for multi-page apps
- SplitChunksPlugin — extracts shared code into common chunks
- Dynamic Imports — import() calls create split points on demand
Start with route-based splitting — it has the highest impact for the lowest complexity. Load only the code for the current route. React Router, Vue Router, and Angular's loadChildren all support this natively.
```javascript
// React example: lazy-loaded route
const Dashboard = React.lazy(() => import('./Dashboard'))
```
After routes, apply component-based splitting to large conditional components: heavy editors, chart libraries, rarely-opened modals.
Barrel files are a code splitting antipattern. An index.js that re-exports everything causes Webpack to evaluate all re-exported modules even if only one is imported. Setting "sideEffects": false in package.json lets Webpack eliminate unused barrel exports via tree shaking.
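A minimal sketch of the relevant package.json field:

```json
{
  "name": "my-app",
  "sideEffects": false
}
```

If some modules genuinely run side effects at import time (global CSS imports, polyfills), list those files instead of using false, e.g. `"sideEffects": ["**/*.css", "./src/polyfills.js"]`, or the bundler may drop code you depend on.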
Vite automatically splits code based on dynamic imports using ES Module semantics with no additional configuration. Next.js extends React.lazy with dynamic(), adding SSR disabling and customizable loading states.
Perceived vs. Measured Performance
Users do not experience milliseconds. They experience whether the interface feels responsive. Perceived performance and measured performance often diverge, and you can improve user satisfaction without improving any measured metric.
Skeleton Screens
Skeleton screens improve perceived load time by 20–30% compared to spinners despite identical actual loading times. The mechanism: a skeleton provides concrete spatial structure, which shortens the psychological duration of waiting, while a spinner gives no information about what is coming or when. It is the watched-pot effect in reverse: when you can see that something is happening and roughly what shape it will take, time passes faster.
Optimistic UI
Optimistic UI updates the interface immediately in response to user actions, before the server confirms. This reduces perceived latency by up to 40%. React 19's useOptimistic() hook makes the pattern mainstream. The tradeoff: you must handle rollback clearly when the server rejects the action.
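The pattern itself is framework-independent. A minimal framework-free sketch (createOptimisticStore and its API are invented for illustration, not a library):

```javascript
// Optimistic-update pattern: render the new state immediately, keep the
// last server-confirmed state around, and roll back if the request fails.
function createOptimisticStore(initial) {
  let confirmed = initial; // last state the server acknowledged
  let current = initial;   // what the UI renders right now
  return {
    get: () => current,
    async apply(optimisticValue, serverRequest) {
      current = optimisticValue; // update the UI before the round-trip
      try {
        confirmed = await serverRequest(); // server stays the source of truth
        current = confirmed;
      } catch {
        current = confirmed; // rollback; surface the error to the user here
      }
      return current;
    },
  };
}
```

A like button, for example, would call apply({ likes: n + 1 }, postLike) and re-render from get() both immediately and after the promise settles; useOptimistic packages the same idea as a React hook.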
View Transitions
The View Transitions API provides smoother visual continuity during navigation, becoming mainstream in 2025. Pages feel faster when the transition is animated and maintains context rather than cutting to blank. It works as progressive enhancement with minimal code overhead.
Prefetching
Speculative prefetching via the Speculation Rules API (2025) improves perceived load time by up to 45%. Prerendering — fully rendering the next page in the background — achieves 98.2% LCP reduction on navigation because the page is already rendered when the user arrives.
Smart hover-triggered prefetching uses a 65ms delay to distinguish deliberate hovers from cursor flyovers, preventing unnecessary requests.
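The hover-intent logic is just a timer cancelled on early mouse-leave. A framework-free sketch (createHoverPrefetcher is invented for illustration; prefetch is whatever mechanism you use, such as fetch(), an injected <link rel="prefetch">, or speculation rules):

```javascript
// Hover-intent prefetching: start a timer on mouseenter, cancel it on
// mouseleave, so only deliberate hovers trigger a network request.
function createHoverPrefetcher(prefetch, delayMs = 65) {
  const timers = new Map();
  const fetched = new Set(); // never prefetch the same URL twice
  return {
    onEnter(url) {
      if (fetched.has(url) || timers.has(url)) return;
      timers.set(url, setTimeout(() => {
        timers.delete(url);
        fetched.add(url);
        prefetch(url);
      }, delayMs));
    },
    onLeave(url) {
      clearTimeout(timers.get(url));
      timers.delete(url);
    },
  };
}
```

Wire onEnter/onLeave to mouseenter/mouseleave on links; the 65ms default matches the delay quoted above.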
A Warning on Fake Progress
Deceptive performance patterns — fake progress bars, animated spinners with no backing state — make slow experiences feel slightly faster but damage user trust. 76% of subscription platforms use dark patterns. The FTC's $2.5 billion Amazon settlement signals serious regulatory consequences. Perceived performance techniques should reflect actual progress, not fabricate it.
The Lab vs. Field Problem
Browser DevTools measure on your machine: a high-end CPU, a fast network, no concurrent background tasks. Real users have 4G on a budget Android phone from 2022.
Synthetic dashboards frequently show green while real users experience poor performance. Real-User Monitoring (RUM) and CrUX field data are the ground truth. Lighthouse and WebPageTest are useful for diagnosing specific issues — not for claiming a site is fast.
The same hydration that takes 50ms in DevTools takes 500–1000ms on a real mobile device. The same render-blocking CSS that costs 20ms in the lab costs 200ms on a slow 3G connection. Always validate against field data.
Worked Example
Diagnosing and Fixing an LCP Regression
Scenario. A content marketing site has an LCP of 4.1 seconds at the 75th percentile in CrUX. Lighthouse shows 2.8 seconds. The team cannot reproduce the problem on their machines.
Step 1: Check what the LCP element is.
Open DevTools > Performance > record a page load. Find the LCP candidate highlighted in the rendering timeline. In this case: a 1.4MB JPEG hero image above the fold.
Step 2: Identify contributing delays.
Use the LCP breakdown in the Performance panel:
- TTFB: 400ms (server)
- Resource load delay: 800ms (image discovered late, competing with render-blocking CSS)
- Resource load time: 1,900ms (1.4MB image over mobile network)
- Render delay: 300ms (main thread busy with JS hydration)
Step 3: Apply targeted fixes.
```html
<!-- Before -->
<img src="/hero.jpg" alt="Hero">

<!-- After -->
<picture>
  <source srcset="/hero.avif" type="image/avif">
  <source srcset="/hero.webp" type="image/webp">
  <img
    src="/hero.jpg"
    alt="Hero"
    width="1200"
    height="600"
    fetchpriority="high"
  >
</picture>
```
And in <head>:
```html
<link rel="preload" as="image" href="/hero.avif" type="image/avif" fetchpriority="high">
```
Result: AVIF reduces the image to ~400kb (from 1.4MB). fetchpriority="high" and preload eliminate the late-discovery delay. Width/height prevent CLS during load.
Step 4: Validate in field data, not just lab.
Deploy behind a feature flag to 10% of traffic. Monitor CrUX LCP percentile over 7 days — CrUX aggregates over 28-day windows, so changes take time to appear. Compare RUM data before and after to see actual user impact by device category.
Common Misconceptions
"A Lighthouse 100 score means the site is fast." Approximately 50% of pages scoring 100 in Lighthouse fail field thresholds on real devices. Lighthouse is a diagnostic tool, not a performance certificate. It runs on your machine, on your network, with no real users.
"My bundle is 200kb gzipped — that's small." Compression reduces transfer size, not parse time. A 200kb gzipped bundle may be 700kb+ of JavaScript the browser must parse, compile, and execute. Parse and compile time on budget mobile hardware is often the bottleneck, not network transfer.
"Lazy loading images improves performance." Lazy loading defers loading until elements approach the viewport. For below-the-fold images, this is correct. For the LCP element — the most important image on the page — lazy loading forces the browser to wait until the element is already visible before starting the fetch. 16% of pages make this mistake and directly harm their LCP score.
"SSR means fast time to interactive." SSR improves First Contentful Paint — the user sees content sooner. But Time to Interactive waits for hydration. An SSR app with 1MB of JavaScript may have great FCP and terrible TTI. The hydration cost often exceeds the network cost on real mobile devices.
"DevTools shows 60ms — performance is fine." DevTools measures on your device. Users on mobile devices with slow CPUs experience 10x longer hydration and parsing times. Only field data (CrUX, RUM) reflects what users actually experience.
Active Exercise
Performance Audit Pipeline
Build a repeatable audit workflow on a site you control (or a public site like a major news site):
Part 1: Field vs. lab divergence (15 min)
- Run a Lighthouse audit on a page with a visible hero image. Note the LCP score.
- Look up the same URL in PageSpeed Insights — this surfaces CrUX field data alongside Lighthouse lab data. Compare the field 75th percentile LCP with the Lighthouse result. Document the gap.
Part 2: LCP element identification (15 min)
- Open DevTools > Performance. Record a cold load (disable cache). Find the LCP candidate in the timeline. Answer: what element type is it? Is it lazy-loaded? Does it have explicit dimensions? Is fetchpriority="high" set?
Part 3: Render-blocking audit (15 min)
- In the same DevTools recording, look for render-blocking resources in the Waterfall. Identify any external CSS loaded before first paint. Are there synchronous scripts? What would change if they used defer?
Part 4: Bundle analysis (15 min)
- If you have access to a frontend codebase, run npx webpack-bundle-analyzer (or the Vite equivalent). Identify the three largest modules by parsed size. Are any of them only used on specific routes? Would dynamic import() apply?
Reflection questions:
- How much did the Lighthouse score differ from the field LCP at the 75th percentile?
- Would fixing the LCP element's lazy loading or missing dimensions be a one-line change? Why is it still broken on 16% of pages?
- If the largest bundle module is a component used only on the settings page, what would route-based splitting save for first-page load?
Key Takeaways
- Core Web Vitals (LCP, INP, CLS) are field metrics, not lab metrics. Lighthouse and DevTools are diagnostic tools. CrUX and RUM data tell you what real users experience. The 75th percentile is the evaluation threshold.
- The critical rendering path is a sequential dependency chain. Render-blocking CSS, late-discovered resources, and synchronous scripts all delay first paint. Inline critical CSS, defer non-critical CSS, and use <link rel="preload"> for late-discovered resources.
- Hydration costs scale with device capability, not developer hardware. Average mobile React SSR apps have 4.2s TTI. Islands architecture, partial hydration, and React Server Components reduce hydration work by limiting how much of the DOM needs JavaScript at all.
- Images are the LCP element on 85% of desktop pages. Serve WebP/AVIF with JPEG fallback, declare explicit dimensions, use fetchpriority="high" on LCP images, and never lazy-load above-the-fold images.
- Bundle size budgets should be enforced in CI. Lighthouse CI fails builds when thresholds are exceeded. Start with route-based code splitting, avoid barrel file antipatterns, and measure parsed size — not just gzipped transfer size.