WebSockets vs HTTP Polling in Angular

Introduction

A dashboard that needs to show live order statuses. A support platform where agents need to see new tickets the moment they arrive. A monitoring system where a reading that is thirty seconds old is worse than no reading at all. A collaborative tool where two users editing the same record simultaneously need to see each other’s changes.

These requirements share a common problem: the server has information the client needs, and the client does not know when that information changes. The request-response model — the client asks, the server answers — does not fit. The server needs to be able to tell the client something happened, without the client first asking.

There are three approaches to this problem that matter in practice: HTTP polling, Server-Sent Events, and WebSockets. They sit at different points on a spectrum of complexity and capability. Understanding all three — what each one is, what it costs, what it is good for — is what lets you reach for the right one when the problem is in front of you.

How Does the Server Tell the Client Something Changed?

HTTP is a request-response protocol. The client initiates every exchange. The server responds to requests but cannot initiate contact with the client. This is the fundamental constraint that real-time web development is working around.

If a new support ticket arrives on the server, the server cannot walk up to your browser and tell you. Your browser is not listening for incoming connections — it is behind a firewall, behind a router, with a dynamic IP address. The server has no way to reach it directly.

So we use one of three workarounds:

HTTP Polling — the client asks the server repeatedly on a fixed interval. “Do you have anything new?” “No.” Wait three seconds. “Do you have anything new?” “No.” Wait three seconds. “Do you have anything new?” “Yes, here it is.” Simple but wasteful — most requests return nothing.

Server-Sent Events — the client makes one HTTP request and keeps the connection open. The server streams events through that connection whenever it has something to say. One-directional: server to client only.

WebSockets — the client makes a special HTTP request that upgrades the connection to a persistent, bidirectional channel. After the handshake, both sides can send messages at any time. Full-duplex, low-latency, and the right tool when you need real communication rather than just data delivery.

Understanding these three as different strategies for the same problem — how does the server get information to the client proactively — makes every implementation decision downstream clearer.

HTTP Polling: Simple, Understood, and Often the Right Choice

Polling gets dismissed as the naive approach. That is not quite fair. For many use cases, polling is entirely appropriate. The question is whether the polling interval matches the freshness requirement of the data.

A dashboard showing order statistics that are updated by a batch job every fifteen minutes does not need WebSockets. Polling every minute is fine — the data is not going to be more current than the batch job, and a one-minute interval on data that changes every fifteen minutes is perfectly adequate. Using WebSockets here is over-engineering, with real costs: persistent server connections, reconnection logic, event schema coordination, testing complexity.

Polling is the right default unless you have a specific reason it is not good enough.

What polling looks like

Understand what polling actually is at the lowest level:

// The conceptual model
let intervalId: ReturnType<typeof setInterval>;

function startPolling() {
  intervalId = setInterval(async () => {
    const response = await fetch('/api/tickets');
    const tickets = await response.json();
    renderTickets(tickets);
  }, 5000); // Every 5 seconds
}

function stopPolling() {
  clearInterval(intervalId);
}

Set an interval. Make a request. Render the response. Clear the interval when done. That is the entire model.

HTTP Polling in Angular with RxJS

In Angular, polling is most naturally expressed with RxJS’s interval and switchMap:

// polling.service.ts
import { Injectable, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import {
  Observable,
  interval,
  switchMap,
  startWith,
  shareReplay,
  distinctUntilChanged,
  retry,
  catchError,
  EMPTY,
} from 'rxjs';

@Injectable({ providedIn: 'root' })
export class TicketPollingService {
  private http = inject(HttpClient);

  // Poll every 10 seconds, emit immediately on subscribe
  tickets$: Observable<Ticket[]> = interval(10_000).pipe(
    startWith(0), // Emit immediately on subscribe, don't wait 10s
    switchMap(() =>
      // Cancel previous in-flight request if interval fires again
      this.http.get<Ticket[]>('/api/tickets').pipe(
        catchError((error) => {
          console.error('Poll failed:', error);
          return EMPTY; // Don't kill the outer stream on error
        })
      )
    ),
    distinctUntilChanged(
      // Only emit if data actually changed
      (a, b) => JSON.stringify(a) === JSON.stringify(b)
    ),
    shareReplay({ bufferSize: 1, refCount: true }) // Share the latest value; stop polling when the last subscriber leaves
  );
}

This is not just setInterval with Angular on top. The RxJS chain adds several important behaviours:

startWith(0) means the first emission happens immediately on subscribe rather than after the first interval. The user sees data right away.

switchMap means if a poll request is still in flight when the next interval fires, the previous request is cancelled. You never queue up multiple simultaneous requests.

distinctUntilChanged means components only re-render when the data has actually changed. If the server returns the same list, nothing happens.

shareReplay means multiple components can subscribe to the same polling Observable without each triggering its own polling loop. Use the { bufferSize: 1, refCount: true } form: plain shareReplay(1) keeps the interval polling forever once started, even after every subscriber has unsubscribed.

catchError returning EMPTY means a failed request does not kill the stream — the next interval will trigger another attempt.
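One caveat on the distinctUntilChanged comparator: JSON.stringify equality is key-order-sensitive, so it only behaves as intended when the server serializes fields in a stable order. A quick illustration:

```typescript
// Semantically equal objects compare as "changed" if key order differs
const a = JSON.stringify({ id: 1, status: 'open' });
const b = JSON.stringify({ status: 'open', id: 1 });

console.log(a === b); // false — same data, different key order
```

For most APIs the order is stable and the comparator is fine; if it is not, compare a canonical form (sorted keys) or a version field from the server instead.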

Using polling in a component

// ticket-dashboard.component.ts
import { Component, inject } from '@angular/core';
import { AsyncPipe } from '@angular/common';
import { TicketPollingService } from './ticket-polling.service';

@Component({
  selector: 'app-ticket-dashboard',
  standalone: true,
  imports: [AsyncPipe],
  template: `
    @if (tickets$ | async; as tickets) {
      <div class="ticket-count">{{ tickets.length }} tickets</div>
      @for (ticket of tickets; track ticket.id) {
        <app-ticket-card [ticket]="ticket" />
      }
    } @else {
      <app-skeleton />
    }
  `,
})
export class TicketDashboardComponent {
  private pollingService = inject(TicketPollingService);
  tickets$ = this.pollingService.tickets$;
  // The async pipe subscribes on init and unsubscribes on destroy;
  // with refCount-based sharing, polling stops once the last subscriber is gone
}

The async pipe manages the subscription lifecycle. When the component is destroyed, the pipe unsubscribes; because tickets$ is shared, polling continues as long as other components are still subscribed. This is also why the refCount: true form of shareReplay matters — it is what tears down the interval once the last subscriber is gone.

Configurable polling intervals

@Injectable({ providedIn: 'root' })
export class ConfigurablePollingService {
  private http = inject(HttpClient);

  poll<T>(url: string, intervalMs: number = 10_000): Observable<T> {
    return interval(intervalMs).pipe(
      startWith(0),
      switchMap(() => this.http.get<T>(url).pipe(catchError(() => EMPTY))),
      shareReplay({ bufferSize: 1, refCount: true })
    );
  }
}

// Usage
tickets$ = this.polling.poll<Ticket[]>('/api/tickets', 15_000);
stats$ = this.polling.poll<DashboardStats>('/api/stats', 60_000);

Long polling: a halfway house

Long polling is a variation where the server holds the request open until it has something new to send, then responds. The client immediately sends a new request. This reduces wasted requests when data changes infrequently:

@Injectable({ providedIn: 'root' })
export class LongPollingService {
  private http = inject(HttpClient);

  longPoll<T>(url: string): Observable<T> {
    const poll = (): Observable<T> =>
      this.http
        .get<T>(url, {
          // Signal to the server this is a long-poll request
          params: { longPoll: 'true' },
          // No client-side timeout here — the server holds the request open until it has data
        })
        .pipe(
          catchError((error) => {
            // On error, wait briefly then try again
            return timer(2000).pipe(switchMap(() => poll()));
          })
        );

    return poll().pipe(
      // Each response immediately triggers the next long-poll request
      expand(() => poll())
    );
  }
}

Long polling is largely superseded by SSE and WebSockets in modern applications. But it exists in production systems and understanding it matters when you encounter it.

When polling is the wrong tool

Polling breaks down when:

  • The interval that is technically adequate is wasteful at scale. If you have ten thousand users polling every five seconds, that is two thousand requests per second hitting your server for data that changes infrequently. The load is proportional to users, not to how often data changes.
  • Latency matters. A notification that arrives up to five seconds late is not a notification — it is a delayed history entry. If the user experience requires seeing something happen in real time, polling’s maximum latency is the interval, and that is baked in.
  • You need bidirectional communication. Polling is one-directional. The client can send data through the request body, but you cannot send and receive on the same channel.
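The scale arithmetic in the first point is worth making concrete:

```typescript
// Polling load is proportional to users and interval — not to how
// often the underlying data actually changes
function pollingRequestsPerSecond(users: number, intervalSeconds: number): number {
  return users / intervalSeconds;
}

const peak = pollingRequestsPerSecond(10_000, 5); // 2000 req/s: 10k users every 5s
const relaxed = pollingRequestsPerSecond(10_000, 60); // ~167 req/s at a 60s interval
```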

Server-Sent Events

Server-Sent Events (SSE) sit between polling and WebSockets. The client makes one HTTP request and keeps the connection open. The server sends events through that connection whenever it has data. The connection is persistent but one-directional — only server to client.
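What travels over that open connection is plain, line-oriented text. The browser's EventSource parses it for you, but a simplified sketch of the format (illustrative, not spec-complete — the real parser handles comments, retry fields, and leading-space rules) makes the protocol concrete:

```typescript
// The SSE wire format: UTF-8 text, events separated by a blank line.
// Each event is "field: value" lines; common fields are "event",
// "data", and "id".
interface SseEvent {
  event: string; // Defaults to "message" when no event: line is present
  data: string;
  id?: string;
}

function parseSseChunk(chunk: string): SseEvent[] {
  return chunk
    .split('\n\n')
    .filter((block) => block.trim().length > 0)
    .map((block) => {
      const evt: SseEvent = { event: 'message', data: '' };
      const dataLines: string[] = [];
      for (const line of block.split('\n')) {
        if (line.startsWith('event:')) evt.event = line.slice(6).trim();
        else if (line.startsWith('data:')) dataLines.push(line.slice(5).trim());
        else if (line.startsWith('id:')) evt.id = line.slice(3).trim();
      }
      evt.data = dataLines.join('\n'); // Multiple data: lines join with newlines
      return evt;
    });
}
```

A server frame like `event: ticket.created` followed by a `data:` line and a blank line becomes one dispatched event on the client.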

SSE has several advantages over WebSockets for the right use cases:

  • Built on HTTP — works through standard proxies, load balancers, and firewalls that might block WebSocket connections
  • Automatic reconnection built into the browser specification
  • Simpler to implement on both sides
  • HTTP/2 supports multiplexing SSE connections, which matters at scale

The limitation is the one-directionality. If the user needs to send data back — acknowledging a notification, requesting specific data, sending a message — you need a separate HTTP request. With WebSockets, both directions flow through the same connection.

SSE in Angular

Angular does not have a built-in SSE client, but the browser’s EventSource API is straightforward to wrap in an Observable:

// sse.service.ts
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class SseService {
  connect<T>(url: string): Observable<T> {
    return new Observable<T>((observer) => {
      const eventSource = new EventSource(url, {
        withCredentials: true, // Include cookies for authentication
      });

      eventSource.onmessage = (event) => {
        try {
          observer.next(JSON.parse(event.data) as T);
        } catch {
          observer.error(new Error('Failed to parse SSE message'));
        }
      };

      eventSource.onerror = (error) => {
        // EventSource automatically reconnects — onerror fires but
        // connection will retry. Only complete if it's a fatal error.
        if (eventSource.readyState === EventSource.CLOSED) {
          observer.error(error);
        }
        // If not closed, EventSource is in CONNECTING state — it's retrying
      };

      // Cleanup function — called when the Observable is unsubscribed
      return () => {
        eventSource.close();
      };
    });
  }

  // Connect and listen to a specific named event
  connectToEvent<T>(url: string, eventName: string): Observable<T> {
    return new Observable<T>((observer) => {
      const eventSource = new EventSource(url, { withCredentials: true });

      const handler = (event: MessageEvent) => {
        try {
          observer.next(JSON.parse(event.data) as T);
        } catch {
          observer.error(new Error(`Failed to parse ${eventName} event`));
        }
      };

      eventSource.addEventListener(eventName, handler);

      eventSource.onerror = () => {
        if (eventSource.readyState === EventSource.CLOSED) {
          observer.error(new Error('SSE connection closed'));
        }
      };

      return () => {
        eventSource.removeEventListener(eventName, handler);
        eventSource.close();
      };
    });
  }
}

Usage in a service:

@Injectable({ providedIn: 'root' })
export class NotificationService {
  private sse = inject(SseService);

  // Server pushes notifications when they occur
  notifications$ = this.sse
    .connect<Notification>('/api/notifications/stream')
    .pipe(shareReplay(1));

  // Server pushes specific event types
  ticketCreated$ = this.sse.connectToEvent<Ticket>(
    '/api/tickets/stream',
    'ticket.created'
  );
}

When SSE is the right choice

SSE fits well when:

  • The server pushes updates to the client and the client does not need to send data back on the same connection
  • You want automatic reconnection with no extra implementation work
  • You are behind load balancers or proxies that might interfere with WebSocket upgrades
  • HTTP/2 is available, which removes SSE’s connection limit concerns

A notification system is the canonical SSE use case. The server pushes a notification when something happens. The client receives it and shows it. If the user dismisses the notification, that is a separate HTTP request. The SSE connection is not involved.

WebSockets: Full-Duplex, Persistent, Real-Time

WebSockets are the most powerful and most complex option. They establish a persistent, bidirectional connection between client and server. After the initial HTTP handshake, both sides can send messages at any time, with minimal overhead — no HTTP headers on every message, no request-response cycle.

The WebSocket connection lifecycle:

  1. Client sends an HTTP request with an Upgrade: websocket header
  2. Server responds with 101 Switching Protocols
  3. The TCP connection stays open — now a WebSocket connection
  4. Both sides send messages freely
  5. Either side closes the connection when done

This is fundamentally different from HTTP. There is no request and response in the traditional sense. There are messages, in both directions, whenever either side has something to say.
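On the wire, steps 1 and 2 are a single, ordinary HTTP exchange. The key and accept values below are the illustrative pair from RFC 6455:

```http
GET /ws HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

The Sec-WebSocket-Accept value is a SHA-1 hash of the client's key concatenated with a fixed GUID — proof that the server actually speaks WebSocket rather than blindly echoing headers.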

Building a production WebSocket service in Angular

The RxJS webSocket function from rxjs/webSocket wraps the browser’s WebSocket API in an Observable. It handles the connection, the message queue, and provides a Subject interface for sending messages:

// websocket.service.ts
import { Injectable, OnDestroy, isDevMode } from '@angular/core';
import {
  webSocket,
  WebSocketSubject,
  WebSocketSubjectConfig,
} from 'rxjs/webSocket';
import {
  Observable,
  Subject,
  timer,
  EMPTY,
  switchMap,
  retryWhen,
  tap,
  share,
  takeUntil,
  filter,
  map,
  catchError,
} from 'rxjs';

// The shape of every message over the WebSocket
export interface WsMessage<T = unknown> {
  event: string;
  payload: T;
  timestamp: string;
  requestId?: string; // For request-response patterns over WebSocket
}

// Specific message types — define these for every event in the schema
export interface TicketCreatedPayload {
  ticket: Ticket;
}

export interface TicketStatusChangedPayload {
  ticketId: string;
  previousStatus: TicketStatus;
  newStatus: TicketStatus;
  changedBy: string;
}

export interface AgentTypingPayload {
  ticketId: string;
  agentId: string;
  agentName: string;
}

@Injectable({ providedIn: 'root' })
export class WebSocketService implements OnDestroy {
  private socket$?: WebSocketSubject<WsMessage>;
  private readonly destroy$ = new Subject<void>();
  private readonly RECONNECT_INTERVAL = 3000;
  private readonly MAX_RECONNECT_ATTEMPTS = 10;

  private getUrl(): string {
    const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
    const host = window.location.host;
    return `${protocol}//${host}/ws`;
  }

  private connect(): WebSocketSubject<WsMessage> {
    if (!this.socket$ || this.socket$.closed) {
      const config: WebSocketSubjectConfig<WsMessage> = {
        url: this.getUrl(),
        // Called when the connection opens
        openObserver: {
          next: () => {
            if (isDevMode()) {
              console.log('[WS] Connection established');
            }
            this.connectionStatus$.next('connected');
          },
        },
        // Called when the connection closes
        closeObserver: {
          next: (closeEvent) => {
            if (isDevMode()) {
              console.log(
                '[WS] Connection closed',
                closeEvent.code,
                closeEvent.reason
              );
            }
            this.connectionStatus$.next('disconnected');
            this.socket$ = undefined;
          },
        },
      };

      this.socket$ = webSocket<WsMessage>(config);
    }
    return this.socket$;
  }

  // Connection status for UI feedback
  readonly connectionStatus$ = new Subject<
    'connected' | 'disconnected' | 'reconnecting'
  >();

  // The main message stream — handles reconnection automatically
  private messages$: Observable<WsMessage> = this.connect().pipe(
    retryWhen((errors) =>
      errors.pipe(
        tap((err) => {
          this.connectionStatus$.next('reconnecting');
          if (isDevMode()) console.log('[WS] Reconnecting...', err);
        }),
        // Wait before reconnecting, with exponential backoff;
        // give up once MAX_RECONNECT_ATTEMPTS is exceeded
        switchMap((err, attempt) => {
          if (attempt >= this.MAX_RECONNECT_ATTEMPTS) {
            throw err; // Propagates to the catchError below
          }
          const delay = Math.min(
            this.RECONNECT_INTERVAL * Math.pow(1.5, attempt),
            30_000 // Max 30 second delay
          );
          return timer(delay);
        })
      )
    ),
    catchError((err) => {
      console.error('[WS] Fatal error, reconnection exhausted:', err);
      return EMPTY;
    }),
    takeUntil(this.destroy$),
    share() // Share one connection across all subscribers
  );

  // Listen for a specific event type — typed
  on<T>(event: string): Observable<T> {
    return this.messages$.pipe(
      filter((msg) => msg.event === event),
      map((msg) => msg.payload as T)
    );
  }

  // Send a message — typed
  send<T>(event: string, payload: T): void {
    const message: WsMessage<T> = {
      event,
      payload,
      timestamp: new Date().toISOString(),
      requestId: crypto.randomUUID(),
    };

    const socket = this.connect();
    socket.next(message);
  }

  // Force an immediate reconnect: erroring the socket triggers the
  // retry pipeline in messages$, which re-establishes the connection
  reconnect(): void {
    this.socket$?.error({ code: 4000, reason: 'Manual reconnect' });
  }

  ngOnDestroy(): void {
    this.destroy$.next();
    this.destroy$.complete();
    this.socket$?.complete();
  }
}

Reconnection with exponential backoff

The reconnection logic in the WebSocketService above uses exponential backoff — each reconnection attempt waits longer than the previous one. This is important because if the server goes down, you do not want hundreds of clients hammering it with reconnection attempts simultaneously. The backoff spreads the load:

// The backoff progression (with attempt starting at 0):
// Attempt 0: 3000ms  (3s)
// Attempt 1: 4500ms  (4.5s)
// Attempt 2: 6750ms  (6.75s)
// Attempt 3: 10125ms (~10s)
// Attempt 4: 15187ms (~15s)
// Capped at 30000ms (30s) for all subsequent attempts

const delay = Math.min(RECONNECT_INTERVAL * Math.pow(1.5, attempt), 30_000);

The cap at 30 seconds prevents a scenario where a brief outage means clients wait minutes to reconnect after the server recovers.
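A further refinement — not implemented in the service above, but common in production — is jitter: randomize each delay so that clients that lost the connection at the same moment do not all retry at the same moment. A sketch, using the same base and cap:

```typescript
// Exponential backoff with "full jitter": a random delay between 0 and
// the capped exponential value (hypothetical helper, not part of the
// WebSocketService above)
function backoffDelay(attempt: number, baseMs = 3000, capMs = 30_000): number {
  const exponential = Math.min(baseMs * Math.pow(1.5, attempt), capMs);
  return Math.random() * exponential;
}
```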

Handling authentication with WebSockets

The browser's WebSocket API does not let you set custom headers on the handshake, and no headers are exchanged after it. Authentication must happen at connection time — through the URL, through cookies sent with the handshake, or as a first message after connecting:

private getUrl(): string {
  const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
  const host = window.location.host;
  const token = this.authService.getToken();

  // Option 1: Token in URL (visible in logs — less secure)
  return `${protocol}//${host}/ws?token=${token}`;
}

A better approach: authenticate via an HTTP endpoint first, get a short-lived WebSocket token, then use that:

private async connect(): Promise<WebSocketSubject<WsMessage>> {
  // Get a short-lived WebSocket-specific token
  const { wsToken } = await firstValueFrom(
    this.http.post<{ wsToken: string }>('/api/auth/ws-token', {})
  );

  const url = `${this.wsBaseUrl}?token=${wsToken}`;
  return webSocket<WsMessage>(url);
}

The WebSocket token is short-lived (30–60 seconds) and single-use. It is less sensitive than the main JWT because it expires quickly and cannot be used for HTTP requests.

Alternatively, send the authentication as the first message after connecting:

// Send auth as the first message after connecting
this.socket$ = webSocket<WsMessage>(this.getUrl());

// Subscribing opens the connection; messages passed to next() are
// buffered by WebSocketSubject until the socket is open, so the auth
// message goes out as soon as the connection is established
this.socket$.subscribe();
this.socket$.next({
  event: 'auth',
  payload: { token: this.authService.getToken() },
  timestamp: new Date().toISOString(),
});

Connection Status UI: Telling the User What Is Happening

One of the most important things about real-time connections is making their state visible. A user staring at a dashboard that has silently lost its WebSocket connection, showing data that is ten minutes old, has a worse experience than one who sees a clear indicator and knows the connection is recovering.

// connection-status.component.ts
@Component({
  selector: 'app-connection-status',
  standalone: true,
  imports: [AsyncPipe],
  template: `
    @switch (status$ | async) {
      @case ('reconnecting') {
        <div class="status-bar status-bar--warning">
          <span class="status-bar__icon">↻</span>
          Reconnecting...
        </div>
      }
      @case ('disconnected') {
        <div class="status-bar status-bar--error">
          <span class="status-bar__icon">✕</span>
          Disconnected — data may be outdated
          <button (click)="retry()">Retry now</button>
        </div>
      }
      @case ('connected') {
        <!-- Show nothing — connected is the expected state -->
      }
    }
  `,
})
export class ConnectionStatusComponent {
  private ws = inject(WebSocketService);
  status$ = this.ws.connectionStatus$;

  retry(): void {
    // Force reconnection
    this.ws.reconnect();
  }
}

Show the reconnecting state. Show the disconnected state. When connected — show nothing. The absence of an indicator is itself the indicator that everything is fine. This is the principle: design the normal state to be invisible, design the abnormal state to be clear.

The Decision Framework: Polling, SSE, or WebSockets

After building all three, here is the framework I use:

Start with polling if:

  • Data changes on a known schedule (every minute, every hour)
  • Latency of up to the polling interval is acceptable to the user
  • The server load from polling is manageable at your user scale
  • The team is small or the timeline is tight — polling is the fastest to implement correctly

Move to SSE if:

  • The server needs to push updates to the client without the client asking
  • Updates happen at unpredictable times
  • You need better latency than polling provides
  • You do not need the client to send data back on the same connection
  • You are in an environment with WebSocket-hostile proxies or load balancers

Use WebSockets if:

  • You need bidirectional communication — client sends data, server sends data, on the same connection
  • Latency must be minimal — under a second, ideally under 100ms
  • The volume of messages makes per-message HTTP overhead significant
  • You need to maintain shared state across clients (collaborative editing, live cursor positions, multiplayer)
  • The use case is safety-critical, financial, or otherwise intolerant of polling latency

The clearest signal for WebSockets is when the user needs to both watch and act — not just receive data, but send it back, and have that interaction feel instant. The fleet operators in CarCam needed to see vehicle positions and acknowledge alerts. That bidirectionality, combined with the latency requirement, made WebSockets the only viable option.

Conclusion

HTTP polling, Server-Sent Events, and WebSockets are not competing technologies. They are different tools for different points on the spectrum of real-time requirements.

Polling is appropriate for more situations than its reputation suggests. If the data changes every few minutes and the polling interval matches, polling is simpler to implement, simpler to debug, and simpler to scale than the alternatives. Do not reach for WebSockets because polling feels unsophisticated.

SSE is underused. For server-to-client push where bidirectionality is not needed, SSE is often the most appropriate choice — it sits exactly between polling and WebSockets in complexity and capability, and it gets the HTTP infrastructure for free.

WebSockets are the right choice when the problem genuinely requires them: low latency, bidirectional communication, high message volume. The complexity cost is real — connection lifecycle management, reconnection logic, authentication at the protocol level, event schema coordination.

RxJS is what makes WebSockets manageable in Angular. Wrapping the connection in an Observable, filtering by event type, composing streams with operators — this is the pattern that lets components subscribe to specific event types without knowing anything about the underlying connection. The complexity of WebSocket management lives in the service layer. The components just receive data.

Know all three options. Reach for the simplest one that fits the requirement. Build the more complex one when the requirement demands it.

Resources