Performance Optimization Guide

Introduction

Every developer I have worked with cares about performance in principle. In practice, performance is the thing that gets addressed after the feature is shipped, after the deadline has passed, after the Lighthouse score surfaces in a client review and suddenly everyone is looking at the number wondering how it got so low.

The reason performance degrades silently is that it is not a single decision — it is the cumulative result of hundreds of small decisions made throughout the development process, each of which looked fine in isolation. A large bundle here. An unoptimized image there. An animation that triggers layout recalculation. A change detection cycle that runs when it did not need to. A render-blocking script that delays the first meaningful paint by three hundred milliseconds.

None of these feel catastrophic in the moment. Together, they produce an application that users experience as slow — and that a Lighthouse audit surfaces as a score in the fifties or sixties when you expected something much better.

I want to reframe how performance is approached, because the framing matters. Performance is not a measurement you optimize toward at the end of a project. It is a set of habits you build into the development process from the start — habits that operate at every layer of the stack, beginning with HTML and CSS, long before the Angular or React framework enters the picture.

This is how those habits work, what the techniques are at each layer, and how to build applications that achieve and sustain a Lighthouse score that reflects the quality of what you shipped.

The Mental Model: Performance Is Layered

Before any technique, the right mental model.

Performance problems in frontend applications are almost always described in framework terms — “the React re-renders are too frequent,” “Angular change detection is too slow,” “the bundle is too large.” These descriptions are not wrong, but they are incomplete. They describe symptoms at the framework layer without addressing the foundation beneath it.

The performance stack looks like this, from bottom to top:

Network         — how assets are requested and delivered
HTML            — document structure and parse cost
CSS             — paint cost, layout triggers, animation performance
Images          — format, size, loading strategy
JavaScript      — parse cost, execution cost, bundle size
Framework       — Angular change detection, React reconciliation
Application     — state management, data fetching patterns

Optimizations at lower layers have a larger impact than optimizations at higher layers. A 300ms reduction in render-blocking CSS has a bigger effect on First Contentful Paint than a 300ms reduction in Angular’s change detection cycle, because the CSS blocks the browser from rendering anything at all while the change detection only affects incremental updates.

Most developers work top-down — they reach for framework optimizations first because that is the layer they are most familiar with. The highest-leverage work is bottom-up.

Layer One: HTML — The Foundation of Fast

HTML is parsed synchronously. Every element the browser encounters in the <head> before it reaches the page content is potentially a delay. Every byte of unnecessary HTML is a byte the browser has to parse before it can render.

Document structure matters

<!-- ❌ Render-blocking resources in the head — browser stops parsing until loaded -->
<head>
  <script src="/bundle.js"></script>
  <!-- blocks HTML parsing entirely -->
  <link rel="stylesheet" href="/styles.css" />
  <!-- blocks rendering -->
  <script src="/analytics.js"></script>
  <!-- another block -->
</head>

<!-- ✅ Defer non-critical JavaScript, preload critical resources -->
<head>
  <!-- Preload the most critical resource — browser fetches it as early as possible -->
  <link rel="preload" href="/fonts/inter.woff2" as="font" crossorigin />

  <!-- Critical CSS inline — no network request, no render block -->
  <style>
    /* Only the styles needed for above-the-fold content */
    body {
      margin: 0;
      font-family: 'Inter', sans-serif;
    }
    .hero {
      min-height: 100vh;
      display: flex;
      align-items: center;
    }
  </style>

  <!-- Non-critical CSS loaded asynchronously -->
  <link
    rel="preload"
    href="/styles.css"
    as="style"
    onload="this.onload=null;this.rel='stylesheet'"
  />
  <noscript><link rel="stylesheet" href="/styles.css" /></noscript>
</head>

<body>
  <!-- Content -->

  <!-- JavaScript deferred — executes after HTML is fully parsed -->
  <script src="/bundle.js" defer></script>

  <!-- Third-party scripts that do not need to block anything -->
  <script src="/analytics.js" async></script>
</body>

Resource hints — telling the browser what is coming

<!-- dns-prefetch — resolve the DNS for a domain before you need it -->
<link rel="dns-prefetch" href="//api.company.com" />

<!-- preconnect — establish connection (DNS + TCP + TLS) early -->
<link rel="preconnect" href="https://fonts.googleapis.com" />
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />

<!-- preload — fetch a specific resource you know you will need soon -->
<link rel="preload" href="/hero-image.webp" as="image" />
<link rel="preload" href="/inter-var.woff2" as="font" crossorigin />

<!-- prefetch — fetch a resource for a future navigation at low priority -->
<link rel="prefetch" href="/dashboard-chunk.js" />

The difference between preload and prefetch matters: preload is for resources needed in the current navigation. prefetch is for resources needed in the next navigation. Using preload for everything is counterproductive — you are increasing network contention for resources that may not be needed yet.

Semantic HTML and rendering cost

The choice of HTML element affects more than accessibility — it affects rendering cost. The browser applies default styles to every element. Custom elements built from <div> require more CSS to achieve the same visual result than semantic elements that carry appropriate defaults. More CSS means more style recalculation. More style recalculation means slower rendering.

Use the right element and the browser does more of the work for you — not just for accessibility but for performance.
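A minimal illustration of the trade — the div version needs extra attributes, CSS, and keyboard handling just to approximate what the native element ships with (the class name is hypothetical):

```html
<!-- ❌ A div pretending to be a button — needs role, tabindex, key handling,
     and CSS to recreate what the browser already provides -->
<div class="btn" role="button" tabindex="0">Save</div>

<!-- ✅ A real button — focus, keyboard activation, and sensible defaults for free -->
<button type="submit">Save</button>
```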

Layer Two: CSS — Where Performance Is Won and Lost

CSS has a larger impact on perceived performance than most developers realize, and the mistakes that kill CSS performance are made at the property level — in which properties are animated, how selectors are structured, and how much layout work the browser is asked to do.

The rendering pipeline — the most important performance concept in CSS

When the browser renders a page, it goes through a pipeline:

JavaScript → Style → Layout → Paint → Composite

Not every CSS change triggers the full pipeline. Some properties only trigger compositing — the cheapest step. Some trigger paint. Some trigger layout — the most expensive, because layout changes can cascade across the entire document.

/* Properties that trigger LAYOUT (most expensive — avoid animating these) */
/* width, height, margin, padding, top, left, right, bottom */
/* font-size, line-height, border-width, display, position */

/* ❌ This animation triggers layout on every frame — janky on any device */
@keyframes slide-in-bad {
  from {
    left: -100px;
  }
  to {
    left: 0;
  }
}

/* Properties that trigger PAINT only */
/* color, background-color, box-shadow, border-color */

/* Properties that trigger COMPOSITE only (cheapest — prefer these) */
/* transform, opacity */

/* ✅ This animation only triggers compositing — smooth on every device */
@keyframes slide-in-good {
  from {
    transform: translateX(-100px);
  }
  to {
    transform: translateX(0);
  }
}

/* ✅ Fade in — opacity only triggers compositing */
@keyframes fade-in {
  from {
    opacity: 0;
  }
  to {
    opacity: 1;
  }
}

The rule: animate only transform and opacity. These properties run on the GPU compositor thread, completely separate from the main JavaScript thread. Even if your main thread is busy, these animations run smoothly.

will-change — tell the browser what is about to happen

/* Tells the browser to promote this element to its own compositor layer
   before the animation begins — eliminates the setup cost at animation start */
.modal {
  will-change: transform, opacity;
}

/* ❌ Do not apply will-change to everything — it consumes GPU memory */
/* Use it only on elements that will actually animate */
* {
  will-change: transform;
} /* this is harmful */

/* ✅ Apply it just before the animation is needed */
.card:hover {
  will-change: transform;
}

Content visibility — the most underused CSS performance property

content-visibility: auto tells the browser to skip rendering for off-screen content entirely. For long pages with complex below-the-fold content — dashboards, data tables, long lists — this can reduce initial render time dramatically.

/* Each section below the fold is not rendered until it is near the viewport */
.dashboard-section {
  content-visibility: auto;
  /* Tell the browser how much space this section takes
     so the scroll bar does not jump as content renders */
  contain-intrinsic-size: 0 500px;
}

Critical CSS — what it is and why it matters

Critical CSS is the CSS needed to render the above-the-fold content. If it is in an external stylesheet, the browser has to download the stylesheet before rendering anything — a full network round trip before the user sees anything at all.

The solution is to inline the critical CSS in a <style> tag in the <head> and load the full stylesheet asynchronously. Tools like critical, Penthouse, or build pipeline integrations extract critical CSS automatically.

<!-- Inline critical CSS — zero network cost, no render block -->
<style>
  /* Extracted critical CSS — only what is visible above the fold */
  :root {
    --color-primary: #e87722;
  }
  body {
    margin: 0;
    font-family: system-ui, sans-serif;
    background: #fff;
  }
  .header {
    height: 64px;
    background: var(--color-primary);
  }
  .hero {
    padding: 4rem 2rem;
  }
</style>

<!-- Full stylesheet loaded after initial render -->
<link
  rel="preload"
  href="/styles.css"
  as="style"
  onload="this.onload=null;this.rel='stylesheet'"
/>

Layer Three: Images — The Largest Opportunity on Most Pages

Images are typically the largest contributor to page weight and the most direct lever on Largest Contentful Paint. The optimizations here are high-impact and largely mechanical.

Format selection

<!-- ❌ JPEG for a photo — reasonable but not optimal -->
<img src="terminal-dashboard.jpg" alt="Terminal dashboard" />

<!-- ✅ WebP with JPEG fallback — 25-35% smaller than JPEG at equivalent quality -->
<picture>
  <source srcset="terminal-dashboard.webp" type="image/webp" />
  <img src="terminal-dashboard.jpg" alt="Terminal dashboard" />
</picture>

<!-- ✅ AVIF with WebP and JPEG fallback — 50% smaller than JPEG, excellent quality -->
<picture>
  <source srcset="terminal-dashboard.avif" type="image/avif" />
  <source srcset="terminal-dashboard.webp" type="image/webp" />
  <img src="terminal-dashboard.jpg" alt="Terminal dashboard" />
</picture>

Lazy loading and size hints

<!-- loading="lazy" — browser defers loading until image approaches viewport -->
<!-- This is the single highest-impact image attribute for performance -->
<!-- Always specify width and height — this prevents layout shift (CLS) -->
<img
  src="terminal-card.webp"
  alt="Terminal ATM-001 status"
  loading="lazy"
  width="400"
  height="300"
/>

<!-- loading="eager" for above-the-fold images — do not lazy load the LCP image -->
<!-- fetchpriority="high" tells the browser this is the priority image -->
<img
  src="hero-dashboard.webp"
  alt="Fleet monitoring dashboard"
  loading="eager"
  fetchpriority="high"
  width="1200"
  height="600"
/>

Responsive images

<!-- srcset and sizes — serve the right size image for the right viewport -->
<img
  srcset="
    dashboard-400.webp   400w,
    dashboard-800.webp   800w,
    dashboard-1200.webp 1200w
  "
  sizes="
    (max-width: 600px) 100vw,
    (max-width: 1200px) 80vw,
    1200px
  "
  src="dashboard-800.webp"
  alt="Monitoring dashboard"
  loading="lazy"
  width="800"
  height="450"
/>
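As a mental model, the browser picks roughly the smallest candidate that covers the CSS slot width multiplied by the device pixel ratio. A simplified sketch of that selection — `pickCandidate` is an illustrative helper, not a browser API, and real browsers also weigh caching and network conditions:

```typescript
interface Candidate {
  url: string;
  width: number; // intrinsic width from the srcset descriptor, e.g. "800w"
}

// Pick the smallest candidate whose intrinsic width covers the slot,
// falling back to the largest available when nothing is big enough.
function pickCandidate(
  candidates: Candidate[],
  slotCssWidth: number,
  dpr: number
): string {
  const needed = slotCssWidth * dpr;
  const sorted = [...candidates].sort((a, b) => a.width - b.width);
  const match = sorted.find(c => c.width >= needed);
  return (match ?? sorted[sorted.length - 1]).url;
}
```

Under this model, a 360px-wide slot on a 2× screen needs 720 physical pixels, so the 800w candidate from the srcset above is chosen.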

Layer Four: JavaScript — Bundle Size and Parse Cost

JavaScript is the most expensive resource per byte — it has to be downloaded, parsed, and executed before it can do anything. A 200kb JavaScript file and a 200kb image have very different costs: the image is decoded once, while the JavaScript must be parsed, compiled, and executed — work that happens on the main thread and blocks interactivity.

Bundle analysis first

Before optimizing a bundle, understand what is in it:

# Angular — analyze the bundle
ng build --stats-json
npx webpack-bundle-analyzer dist/app/stats.json

# React (Create React App)
npm run build
npx source-map-explorer 'build/static/js/*.js'

# What to look for:
# — Dependencies you are importing entirely but using partially (lodash, moment)
# — Duplicate dependencies at different versions
# — Large libraries with smaller alternatives (moment → date-fns, lodash → native)
# — Development code that leaked into the production bundle

Tree shaking — import only what you use

// ❌ Imports the entire lodash library — ~70kb
import _ from 'lodash';
const result = _.groupBy(tickets, 'status');

// ✅ Imports only groupBy — ~2kb
import groupBy from 'lodash/groupBy';
const result = groupBy(tickets, 'status');

// ✅ Even better — native JavaScript, zero bundle cost
const result = tickets.reduce(
  (acc, ticket) => {
    (acc[ticket.status] ??= []).push(ticket);
    return acc;
  },
  {} as Record<string, Ticket[]>
);

// ✅ Named imports from date-fns — the library ships ES modules designed
// for tree-shaking, so a modern bundler keeps only these three functions.
// Verify tree-shaking in the bundle analyzer rather than assuming it.
import { format, parseISO, addDays } from 'date-fns';

Code splitting — load only what the user needs now

// Angular — lazy loaded routes
// Each feature module loads only when the user navigates to that route
const routes: Routes = [
  {
    path: 'tickets',
    loadChildren: () =>
      import('./features/tickets/tickets.module')
        .then(m => m.TicketsModule),
  },
  {
    path: 'monitoring',
    loadChildren: () =>
      import('./features/monitoring/monitoring.module')
        .then(m => m.MonitoringModule),
  },
];

// React — React.lazy for route-level code splitting
const TicketsPage = React.lazy(() => import('./pages/TicketsPage'));
const MonitoringPage = React.lazy(() => import('./pages/MonitoringPage'));

function App() {
  return (
    <Suspense fallback={<PageLoader />}>
      <Routes>
        <Route path="/tickets" element={<TicketsPage />} />
        <Route path="/monitoring" element={<MonitoringPage />} />
      </Routes>
    </Suspense>
  );
}

Layer Five: Angular Performance — Framework-Specific Techniques

Once the foundation layers are solid, the framework-level optimizations become meaningful. Without the foundation, they produce marginal improvements on top of significant problems.

OnPush change detection — the single most impactful Angular optimization

Angular’s default change detection checks every component on every event — mouse moves, keyboard presses, timer ticks. With OnPush, a component only checks when its inputs change by reference.

// ❌ Default change detection — checks this component on every single event
// On a dashboard with 50 terminal cards, that is 50 checks per mouse move
@Component({
  selector: 'app-terminal-card',
  template: `<div>{{ terminal.id }} — {{ terminal.status }}</div>`,
})
export class TerminalCardComponent {
  @Input() terminal: Terminal;
}

// ✅ OnPush — only checks when terminal input reference changes
// 50 terminal cards + frequent WebSocket updates = dramatically less work
@Component({
  selector: 'app-terminal-card',
  template: `<div>{{ terminal.id }} — {{ terminal.status }}</div>`,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class TerminalCardComponent {
  @Input() terminal: Terminal; // must be immutable — new object on change
}

OnPush requires immutable data. You cannot mutate the terminal object and expect the component to update — you must return a new object. This is not a limitation, it is the correct pattern:

// ❌ Mutates — OnPush component will NOT update
this.terminal.status = 'offline';

// ✅ New reference — OnPush component WILL update
this.terminal = { ...this.terminal, status: 'offline' };

// ✅ In NgRx — reducers produce new state, OnPush works perfectly
function terminalReducer(state, action) {
  switch (action.type) {
    case 'STATUS_UPDATE':
      return state.map(t =>
        t.id === action.id ? { ...t, status: action.status } : t
      );
    default:
      return state;
  }
}

trackBy in ngFor — preventing unnecessary DOM destruction

Without trackBy, Angular destroys and recreates every DOM node in a list when the array reference changes. With trackBy, Angular identifies which items changed and only updates those.

// ❌ Without trackBy — Angular recreates ALL terminal elements when array updates
// On a WebSocket-driven dashboard updating every second, this is catastrophic
@Component({
  template: `
    <app-terminal-card
      *ngFor="let terminal of terminals$ | async"
      [terminal]="terminal"
    >
    </app-terminal-card>
  `,
})
export class TerminalListNaiveComponent {}

// ✅ With trackBy — Angular only updates changed elements
@Component({
  template: `
    <app-terminal-card
      *ngFor="let terminal of terminals$ | async; trackBy: trackByTerminalId"
      [terminal]="terminal"
    >
    </app-terminal-card>
  `,
})
export class TerminalListComponent {
  trackByTerminalId(index: number, terminal: Terminal): string {
    return terminal.id; // stable identity — Angular uses this to match elements
  }
}
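Conceptually, trackBy turns the list diff into a comparison of stable keys rather than object references. A simplified sketch of the idea — `diffByKey` is an illustrative helper, not Angular's actual differ:

```typescript
interface Terminal {
  id: string;
  status: string;
}

// Which DOM nodes would a keyed differ keep vs. create,
// given the previous and next arrays?
function diffByKey(prev: Terminal[], next: Terminal[]) {
  const prevIds = new Set(prev.map(t => t.id));
  return {
    // Same key — the existing DOM node is kept and updated in place
    kept: next.filter(t => prevIds.has(t.id)).map(t => t.id),
    // New key — only these require fresh DOM nodes
    created: next.filter(t => !prevIds.has(t.id)).map(t => t.id),
  };
}
```

Even though every object in `next` is a brand-new reference (as OnPush requires), a terminal whose id survives keeps its DOM node — that is the entire point of trackBy.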

Angular Signals — the modern change detection approach

Angular 17+ introduced signals as a fine-grained reactivity primitive that bypasses zone.js entirely for components that use them. Components using signals only re-render when the specific signals they read change — not on any async event.

// Signal-based component — fine-grained reactivity, no zone.js overhead
@Component({
  selector: 'app-terminal-status',
  template: `
    <div [class.offline]="terminal().status === 'offline'">
      {{ terminal().id }} — {{ terminal().status }}
    </div>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class TerminalStatusComponent {
  terminal = input.required<Terminal>(); // signal input (Angular 17.1+)

  // This component only re-renders when the terminal signal value changes
  // Not on keyboard events. Not on mouse moves. Only on terminal changes.
}

Deferrable views — Angular 17’s built-in lazy rendering

<!-- @defer — Angular 17+ native lazy rendering with granular control -->

<!-- Load the analytics chart only when the user scrolls near it -->
@defer (on viewport) {
<app-terminal-analytics [terminalId]="terminalId" />
} @placeholder {
<div class="chart-placeholder">Loading analytics...</div>
} @loading (minimum 200ms) {
<app-skeleton-chart />
} @error {
<app-error-state message="Failed to load analytics" />
}

<!-- Load secondary content only when the browser is idle -->
@defer (on idle) {
<app-related-incidents [terminalId]="terminalId" />
}

<!-- Load on interaction — only when the user clicks the trigger element -->
<!-- expandBtn is a template reference variable (#expandBtn) on the trigger -->
<button #expandBtn>Show history</button>
@defer (on interaction(expandBtn)) {
<app-terminal-history [terminalId]="terminalId" />
}

Layer Six: React Performance — Framework-Specific Techniques

React.memo — preventing unnecessary re-renders

// ❌ Without memo — re-renders every time the parent re-renders
// even if terminal prop has not changed
function TerminalCard({ terminal }: { terminal: Terminal }) {
  return (
    <div className={`card ${terminal.status}`}>
      <h3>{terminal.id}</h3>
      <span>{terminal.status}</span>
    </div>
  );
}

// ✅ With memo — only re-renders when terminal prop changes by reference
const TerminalCard = React.memo(function TerminalCard({
  terminal
}: { terminal: Terminal }) {
  return (
    <div className={`card ${terminal.status}`}>
      <h3>{terminal.id}</h3>
      <span>{terminal.status}</span>
    </div>
  );
});

// ✅ Custom comparison for complex objects
const TerminalCard = React.memo(
  function TerminalCard({ terminal }: { terminal: Terminal }) {
    return <div>{terminal.id} — {terminal.status}</div>;
  },
  (prev, next) =>
    prev.terminal.id === next.terminal.id &&
    prev.terminal.status === next.terminal.status
  // Returning true means "props are equal — skip the re-render";
  // only an id or status change triggers a render, other fields are ignored
);
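Without a custom comparator, React.memo performs a shallow equality check over the props object. A sketch of that default behavior — this captures the idea, not React's exact source:

```typescript
// Shallow comparison: same keys, and each value identical by reference
// (Object.is). Nested objects are NOT compared deeply — which is why
// memoized components need stable or immutable props.
function shallowEqual(
  a: Record<string, unknown>,
  b: Record<string, unknown>
): boolean {
  if (Object.is(a, b)) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(k => Object.is(a[k], b[k]));
}
```

This is why spreading a new object into a prop (`{ ...terminal }`) defeats the default memo even when nothing inside changed — the reference differs, so the comparison fails.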

useMemo and useCallback — stabilising references

// ❌ New function reference on every render — child with React.memo re-renders anyway
function TerminalList({ terminals }: { terminals: Terminal[] }) {
  // New function reference every render — breaks memo on children
  const handleResolve = (id: string) => {
    resolveTerminalIssue(id);
  };

  // Expensive computation runs on every render
  const offlineCount = terminals.filter(t => t.status === 'offline').length;

  return (
    <div>
      <h2>{offlineCount} terminals offline</h2>
      {terminals.map(t => (
        <TerminalCard key={t.id} terminal={t} onResolve={handleResolve} />
      ))}
    </div>
  );
}

// ✅ Stable references — React.memo on children works correctly
function TerminalList({ terminals }: { terminals: Terminal[] }) {
  // Stable function reference — only changes if resolveTerminalIssue changes
  const handleResolve = useCallback((id: string) => {
    resolveTerminalIssue(id);
  }, []); // empty deps — function identity never changes

  // Computed value only recalculates when terminals changes
  const offlineCount = useMemo(
    () => terminals.filter(t => t.status === 'offline').length,
    [terminals]
  );

  return (
    <div>
      <h2>{offlineCount} terminals offline</h2>
      {terminals.map(t => (
        <TerminalCard key={t.id} terminal={t} onResolve={handleResolve} />
      ))}
    </div>
  );
}

Virtualization — rendering only what is visible

For long lists — terminal dashboards with hundreds of devices, ticket lists with thousands of entries — rendering every item at once is expensive regardless of how well change detection is managed. Virtualization renders only the items currently visible in the viewport.

// React — react-window for virtualised lists
import { FixedSizeList } from 'react-window';

function TerminalList({ terminals }: { terminals: Terminal[] }) {
  const Row = ({ index, style }: { index: number; style: React.CSSProperties }) => (
    <div style={style}>  {/* style must be applied — sets position for the row */}
      <TerminalCard terminal={terminals[index]} />
    </div>
  );

  return (
    <FixedSizeList
      height={600}          // visible area height
      itemCount={terminals.length}
      itemSize={80}         // height of each row in pixels
      width="100%"
    >
      {Row}
    </FixedSizeList>
  );
}
// Renders ~8 items instead of 1000, scrolls through all 1000.
// DOM node count stays constant regardless of list length.

// Angular — CDK virtual scrolling
import { ScrollingModule } from '@angular/cdk/scrolling';

@Component({
  template: `
    <cdk-virtual-scroll-viewport itemSize="80" style="height: 600px;">
      <app-terminal-card
        *cdkVirtualFor="let terminal of terminals; trackBy: trackById"
        [terminal]="terminal"
      >
      </app-terminal-card>
    </cdk-virtual-scroll-viewport>
  `,
})
export class TerminalListComponent {
  trackById = (i: number, t: Terminal) => t.id;
}
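Both libraries rest on the same arithmetic — derive the visible slice of items from the scroll offset. A simplified sketch (`visibleRange` is an illustrative helper, not either library's API):

```typescript
// Which items should be in the DOM for a fixed-height virtualised list?
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemSize: number,
  itemCount: number,
  overscan = 1 // extra rows above/below to avoid blank flashes while scrolling
) {
  const start = Math.max(0, Math.floor(scrollTop / itemSize) - overscan);
  const end = Math.min(
    itemCount,
    Math.ceil((scrollTop + viewportHeight) / itemSize) + overscan
  );
  return { start, end }; // render items[start..end)
}
```

With a 600px viewport and 80px rows, only about ten rows are ever mounted, no matter whether the list holds one hundred items or one hundred thousand.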

Reaching 100 on Lighthouse — The Specific Targets

Lighthouse scores across five categories: Performance, Accessibility, Best Practices, SEO, and (in versions before Lighthouse 12) PWA. Reaching 100 across all of them requires addressing each specifically. Here is what each category penalizes and how to address it.

Performance — the Core Web Vitals

The performance score is driven by three Core Web Vitals:

LCP (Largest Contentful Paint) — target under 2.5 seconds. The time until the largest visible element is rendered.

  • Preload the LCP image with fetchpriority="high"
  • Serve images in WebP or AVIF
  • Inline critical CSS
  • Use a CDN — network latency is the largest single contributor to LCP

CLS (Cumulative Layout Shift) — target under 0.1. How much content shifts unexpectedly during loading.

  • Always specify width and height on images — the single most common CLS source
  • Reserve space for dynamically loaded content (ads, banners, async components)
  • Avoid inserting content above existing content after load
  • Use CSS aspect-ratio for containers with dynamic images
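The aspect-ratio technique from the last bullet looks like this — the selector and ratio are illustrative:

```css
.thumbnail {
  width: 100%;
  /* Space is reserved at the correct ratio before the image loads — no shift */
  aspect-ratio: 4 / 3;
}
```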

INP (Interaction to Next Paint) — target under 200ms. How quickly the page responds to user interaction.

  • Move expensive work off the main thread with Web Workers
  • Debounce high-frequency event handlers
  • Use scheduler.yield() to break long tasks
  • Avoid synchronous operations in event handlers

// Breaking a long task to keep the main thread responsive
async function processLargeDataset(data: TerminalData[]) {
  const results = [];

  for (let i = 0; i < data.length; i++) {
    results.push(processItem(data[i]));

    // Yield to the browser every 50 items — lets it handle user input
    if (i % 50 === 0) {
      await scheduler.yield(); // or: await new Promise(r => setTimeout(r, 0))
    }
  }

  return results;
}
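Debouncing, from the list above, can be sketched as a small generic utility (not tied to any particular library):

```typescript
// Collapse a burst of calls into a single call after `wait` ms of silence.
function debounce<T extends (...args: any[]) => void>(fn: T, wait: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Usage: run the expensive filter once the user stops typing,
// instead of on every keystroke
// searchInput.addEventListener('input', debounce(runFilter, 250));
```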

Font optimization — the often-missed performance drain

/* font-display: swap — show fallback font immediately, swap when custom font loads */
/* Prevents invisible text during font load (FOIT) */
@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter-var.woff2') format('woff2');
  font-weight: 100 900;
  font-display: swap;
}

/* Size-adjust — reduce layout shift when fallback font swaps to custom font */
@font-face {
  font-family: 'Inter-Fallback';
  src: local('Arial');
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
  size-adjust: 107%;
}
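The metric-adjusted fallback only helps if it actually appears in the font stack — wiring it in (the override values above are per-font and would be tuned for your specific fallback):

```css
body {
  /* The adjusted Arial renders immediately; when Inter arrives,
     the swap causes minimal layout shift because the metrics match */
  font-family: 'Inter', 'Inter-Fallback', sans-serif;
}
```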

Eliminating render-blocking resources checklist

<!-- The complete optimized head — what 100 Performance looks like -->
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <!-- Preconnect to critical origins -->
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
  <link rel="dns-prefetch" href="//api.company.com" />

  <!-- Inline critical CSS — zero network round trip -->
  <style>
    /* critical above-fold styles */
  </style>

  <!-- Preload critical assets -->
  <link rel="preload" href="/fonts/inter-var.woff2" as="font" crossorigin />
  <link rel="preload" href="/hero.webp" as="image" fetchpriority="high" />

  <!-- Non-critical CSS loaded async — does not block rendering -->
  <link
    rel="preload"
    href="/app.css"
    as="style"
    onload="this.onload=null;this.rel='stylesheet'"
  />
  <noscript><link rel="stylesheet" href="/app.css" /></noscript>

  <!-- SEO and meta -->
  <title>Application Title</title>
  <meta name="description" content="Description under 160 characters" />
</head>

<body>
  <div id="root"></div>

  <!-- All JavaScript deferred — never blocks HTML parsing -->
  <script type="module" src="/main.js"></script>
</body>

Service Worker — for the PWA score and offline capability

// Angular — register service worker in app.module.ts
import { ServiceWorkerModule } from '@angular/service-worker';

@NgModule({
  imports: [
    ServiceWorkerModule.register('ngsw-worker.js', {
      enabled: environment.production,
      // Wait until the app is stable (or 30s) before registering —
      // the service worker never competes with the initial load
      registrationStrategy: 'registerWhenStable:30000'
    })
  ]
})
export class AppModule {}

// ngsw-config.json — what to cache and how
{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": {
        "files": ["/favicon.ico", "/index.html", "/*.css", "/*.js"]
      }
    },
    {
      "name": "assets",
      "installMode": "lazy",
      "updateMode": "prefetch",
      "resources": {
        "files": ["/assets/**", "/*.(svg|cur|jpg|jpeg|png|webp|gif|otf|ttf|woff|woff2|ani)"]
      }
    }
  ]
}

The Audit Workflow — How to Actually Reach 100

Reaching and sustaining a high Lighthouse score is not a one-time activity. It is a workflow:

Performance audit workflow

1. Run Lighthouse in incognito mode
   — Extensions affect scores. Always use incognito.
   — Run three times and take the median — scores vary by ~5 points per run.

2. Read the Opportunities section first
   — These are the highest-impact items with estimated savings.
   — Address the largest estimated savings first.

3. Fix layer by layer — bottom up
   — Network: enable compression (gzip/brotli), use a CDN
   — HTML: remove render-blocking resources
   — CSS: inline critical, async-load the rest
   — Images: convert to WebP/AVIF, add dimensions, add lazy loading
   — JavaScript: analyze bundle, code-split routes, tree-shake dependencies
   — Framework: OnPush / React.memo, trackBy / key, virtualize long lists

4. Check Core Web Vitals in the field
   — Lighthouse is a lab test. CrUX (Chrome User Experience Report)
     shows real-world data. Both matter.
   — PageSpeed Insights shows both lab and field data together.

5. Set a performance budget — enforce it in CI
   — A score that nobody is watching will drift.

// Angular — performance budget in angular.json
{
  "configurations": {
    "production": {
      "budgets": [
        {
          "type": "initial",
          "maximumWarning": "500kb",
          "maximumError": "1mb"    // build fails if initial bundle exceeds 1mb
        },
        {
          "type": "anyComponentStyle",
          "maximumWarning": "4kb",
          "maximumError": "8kb"
        }
      ]
    }
  }
}
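The same budget idea works outside Angular — a minimal, framework-agnostic CI sketch (the path and the 500 KB limit are illustrative):

```shell
# check_budget FILE MAX_BYTES — non-zero exit status if FILE exceeds the budget
check_budget() {
  size=$(wc -c < "$1")
  if [ "$size" -gt "$2" ]; then
    echo "$1 is ${size} bytes — over the ${2}-byte budget" >&2
    return 1
  fi
  echo "$1 OK: ${size} bytes"
}

# In CI: fail the pipeline when the bundle outgrows the budget
# check_budget dist/main.js 512000 || exit 1
```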

Conclusion

Performance is not a score. It is the experience a user has when they open your application on a mid-range phone on a 4G connection at the end of a long day. The Lighthouse score is a proxy for that experience — a useful proxy, but a proxy. The goal is the experience, not the number.

The path to that experience runs through every layer of the stack, starting at the bottom. Semantic HTML that does not block rendering. CSS that animates only composited properties. Images in modern formats with correct dimensions. JavaScript bundles that contain only what is needed now. Framework change detection that runs only when something has actually changed.

The next time you write an animation, check which property you are animating. The next time you add an image, add its width and height. The next time you add a third-party script, check whether it needs to be render-blocking. These are seconds of thought at write time. They are minutes of Lighthouse investigation at audit time. Make the decision once, at the right moment, and move on.