A Frontend Developer's Guide to Backend

Introduction

Frontend developers who decide to learn backend usually make the same mistake. They install Node.js, follow a tutorial, build a REST API that creates and reads todos, and come away feeling like they understand backend development. Then they join a team working on a real backend system and discover that what they learned in the tutorial covers roughly five percent of what production backend work involves.

The gap is not in the syntax. JavaScript is JavaScript. The gap is in the mental model — the understanding of what a server actually does, what can go wrong at scale, what the database is doing under the hood, and why experienced backend developers make the specific choices they make. Those choices look arbitrary or overcautious until you understand the problems they are preventing.

I want to offer a different kind of guide. Not a tutorial that walks through building a specific application, but a map of the terrain — the concepts, the mental models, the structural ideas that separate backend developers who understand what they are doing from those who are following patterns they copied from Stack Overflow. And I want to write it specifically for frontend developers, because the background matters. The things you already understand about JavaScript, about asynchrony, about data flow — they transfer, and knowing how they transfer makes the learning faster.

Start Here: The Mental Model Shift

Before any code, there is a conceptual shift that frontend developers need to make, because the mental model of backend is genuinely different from the mental model of frontend in ways that are easy to underestimate.

On the frontend, you are building for a single user at a time. A component renders for one person. The state in that component belongs to that session. If something is slow, one person notices. If something breaks, one person is affected. The unit of concern is the individual user experience.

On the backend, you are building for many users simultaneously. A single endpoint might be handling hundreds or thousands of concurrent requests at any moment. State is shared, or has to be carefully partitioned so it is not. If something is slow, it is slow for everyone. If something breaks — a database query that takes ten seconds instead of ten milliseconds — it can take down the entire system for every user at once. The unit of concern is the system under load.

This shift in thinking is more important than any specific technology. A frontend developer who brings the single-user mental model to backend will write code that works in development, works in testing, and collapses under real production load in ways they did not anticipate because they were never thinking about concurrency, shared state, or resource contention.

The first question to ask when writing backend code is not “does this work?” It is “what happens when a thousand people do this simultaneously?” That question, asked consistently, is what separates backend thinking from frontend thinking applied to a server.
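To make the hazard concrete, here is a minimal sketch of the classic lost-update problem: a read-modify-write on shared state with an await in the middle, run concurrently. The names (pageViews, recordView, simulateTraffic) are illustrative, not from any library.

```typescript
// A naive in-memory counter: the kind of shared state that looks fine
// when one user tests it, and silently loses updates under concurrency.
let pageViews = 0;

async function recordView(): Promise<void> {
  const current = pageViews; // 1. read shared state
  await new Promise((resolve) => setTimeout(resolve, 0)); // 2. yield, as a db call would
  pageViews = current + 1; // 3. write back a now-stale value
}

async function simulateTraffic(requests: number): Promise<number> {
  // All "requests" read the counter before any of them writes back
  await Promise.all(Array.from({ length: requests }, () => recordView()));
  return pageViews;
}

// With 100 concurrent requests, every handler reads 0, so the final
// count is 1; the other 99 updates are lost.
```

The fix in real systems is to push the increment into the database (`UPDATE ... SET views = views + 1`) or another single-writer mechanism, rather than reading and writing shared state across an await.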

Phase One: The HTTP Layer — What You Think You Know and What You Don’t

Frontend developers interact with HTTP constantly. You write fetch calls, you read response status codes, you handle JSON. You know HTTP exists and roughly what it does.

But knowing HTTP from the client side and understanding HTTP well enough to build reliable server-side APIs are meaningfully different levels of knowledge. Here is what you need to go deeper on.

Status codes as contracts

Most frontend developers know 200, 404, and 500. A production-quality API requires a much more deliberate vocabulary:

2xx — Success
  200 OK           — request succeeded, response contains result
  201 Created      — resource was created, Location header points to it
  204 No Content   — succeeded, nothing to return (DELETE, some PUTs)

3xx — Redirection
  301 Moved Permanently  — client should update bookmarks
  302 Found              — temporary redirect
  304 Not Modified       — cached version is still valid (ETag/If-None-Match)

4xx — Client error (the client did something wrong)
  400 Bad Request        — malformed request, validation failure
  401 Unauthorized       — not authenticated (misleading name)
  403 Forbidden          — authenticated but not authorised
  404 Not Found          — resource does not exist
  409 Conflict           — request conflicts with current state
  422 Unprocessable      — validation failed (often preferred over 400)
  429 Too Many Requests  — rate limit exceeded

5xx — Server error (you did something wrong)
  500 Internal Server Error  — unhandled exception, the catch-all
  502 Bad Gateway            — upstream service failed
  503 Service Unavailable    — overloaded or in maintenance

The distinction between 401 and 403 matters in real systems — they require different client behaviour. The distinction between 400 and 422 is a convention worth understanding. Using 500 for everything obscures whether the problem is in your code or in a dependency.

// An Express route that uses status codes as contracts, not just signals
router.post(
  '/tickets',
  authenticate,
  validateBody(createTicketSchema),
  async (req: AuthenticatedRequest, res: Response, next: NextFunction) => {
    try {
      const ticket = await ticketService.create(req.body, req.user);
      // 201 Created — resource was created, Location tells client where to find it
      res
        .status(201)
        .location(`/api/v1/tickets/${ticket.id}`)
        .json({ data: ticket });
    } catch (error) {
      next(error); // Validation → 422, Duplicate → 409, No permission → 403
    }
  }
);

HTTP headers — the part tutorials skip

HTTP headers carry a significant amount of the protocol’s meaning, yet tutorials almost universally skip them. The ones you need to understand:

Caching headers — Cache-Control, ETag, Last-Modified, If-None-Match. A well-implemented caching strategy can reduce your server load by orders of magnitude. A poorly understood one serves stale data to users or makes your CDN useless.

Content negotiation — Content-Type, Accept. Your API should declare what it produces and validate what it receives.

CORS headers — Access-Control-Allow-Origin and friends. As a frontend developer you have probably fought CORS errors. Understanding what the server is doing when it sets these headers makes the problem tractable rather than mysterious.

Authentication headers — Authorization: Bearer <token>. Simple to use, important to understand the security implications of how the token is validated on the server side.
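As one concrete example, conditional caching with ETag and If-None-Match comes down to a hash comparison. A minimal sketch (etagFor and isFresh are my names, not a library API):

```typescript
import { createHash } from 'node:crypto';

// Derive a strong ETag from the serialised response body
function etagFor(body: string): string {
  return `"${createHash('sha1').update(body).digest('hex')}"`;
}

// The 304 decision: does the client's If-None-Match cover the current ETag?
function isFresh(ifNoneMatch: string | undefined, etag: string): boolean {
  if (!ifNoneMatch) return false;
  return ifNoneMatch
    .split(',')
    .map((value) => value.trim())
    .some((value) => value === etag || value === '*');
}

// In a handler: when isFresh(req.headers['if-none-match'], etag) is true,
// respond 304 with no body; otherwise send 200 with the ETag header set.
```

Frameworks and CDNs do this for you in many cases, but knowing the mechanism makes Cache-Control decisions much less mysterious.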

The HTTP request lifecycle on the server

A request arrives at your server. Here is what actually happens, which tutorials rarely make explicit:

1. TCP connection established (or reused — keep-alive)
2. TLS handshake (if HTTPS)
3. HTTP request parsed — method, path, headers, body
4. Routing — which handler should process this?
5. Middleware chain — auth, logging, body parsing, rate limiting...
6. Route handler — your business logic
7. Response serialised — headers set, body encoded
8. Response sent
9. Connection kept alive or closed

Understanding this pipeline tells you where different kinds of problems originate. Authentication failures happen in step 5. Routing errors happen in step 4. Serialisation errors happen in step 7. When you understand the pipeline, you know where to look.
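The pipeline becomes visible if you drop below the framework for a moment. A sketch of steps 3 to 7 using nothing but Node’s http module (the route-table shape is my own, not an Express API):

```typescript
import { createServer, type IncomingMessage, type ServerResponse } from 'node:http';

type Handler = (req: IncomingMessage, res: ServerResponse) => void;

// Step 4: routing, as a bare method+path table instead of Express's router
const routes: Record<string, Handler> = {
  'GET /health': (_req, res) => {
    // Step 7: serialisation, headers set and body encoded by hand
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok' }));
  },
};

function matchRoute(method: string | undefined, path: string): Handler | undefined {
  return routes[`${method} ${path}`];
}

// Step 3 (parsing) is done by Node before this callback runs; step 5
// (middleware) would sit between the match and the handler call.
const server = createServer((req, res) => {
  const { pathname } = new URL(req.url ?? '/', 'http://localhost');
  const handler = matchRoute(req.method, pathname);
  if (!handler) {
    res.writeHead(404, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: { code: 'NOT_FOUND' } }));
    return;
  }
  handler(req, res);
});

// server.listen(3000) would begin steps 1 and 2: accepting TCP connections
```

Everything Express gives you (routers, middleware chains, res.json) is a layer over exactly this loop.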

Phase Two: Node.js and the Event Loop — What Frontend Actually Gives You

Here is where your frontend background is a genuine advantage, and I want to name it explicitly so you can use it.

Node.js runs on the same V8 JavaScript engine as the browser. The event loop — the single-threaded, non-blocking execution model you learned to reason about in JavaScript — is the same model running your server. The call stack, the microtask queue, the message queue — the mental model from frontend applies directly.

This means you already understand the most surprising thing about Node.js to developers coming from other backend languages: it is single-threaded, yet it handles concurrent requests efficiently through non-blocking I/O.

// This is why Node.js can handle many requests simultaneously
// despite being single-threaded

// When this runs, Node doesn't wait — it registers the callback
// and moves on to handle the next request
app.get('/tickets', async (req, res) => {
  // The await here does NOT block the thread
  // Node registers the database query callback and handles other requests
  // while waiting for the database to respond
  const tickets = await db.query('SELECT * FROM tickets');
  res.json(tickets);
});

What you need to add to your existing mental model:

Blocking the event loop is catastrophic in backend. On the frontend, a brief block might make your UI feel sluggish for one user. On a server, blocking the event loop means every concurrent request waits while you do the blocking thing. CPU-intensive synchronous work — complex computations, synchronous file reads, poorly written loops over large datasets — blocks every user simultaneously.

// ❌ This blocks the event loop — every pending request waits
app.post('/process', (req, res) => {
  const result = heavyCpuComputation(req.body.data); // blocks for 200ms
  res.json(result);
});

// ✅ CPU-intensive work goes to a worker thread
// (runInWorkerThread is a small wrapper you would write around Worker)
import { Worker } from 'worker_threads';

app.post('/process', async (req, res, next) => {
  try {
    const result = await runInWorkerThread(req.body.data);
    res.json(result);
  } catch (error) {
    next(error);
  }
});

Unhandled promise rejections crash the process — or they should, and in production Node.js, they do. On the frontend, an unhandled promise rejection produces a console warning. On the server, it can take down the process that is serving every user. Every async operation needs error handling.

// The pattern to enforce in every Express route handler
// Always wrap async handlers and call next(error) — unhandled rejections crash the process
async function getTicket(req: Request, res: Response, next: NextFunction) {
  try {
    const ticket = await ticketRepository.findById(req.params.id);

    if (!ticket) {
      throw new NotFoundError(`Ticket ${req.params.id} not found`);
    }

    res.json({ data: ticket });
  } catch (error) {
    // Always call next(error) — never let async errors go unhandled
    next(error);
  }
}

router.get('/tickets/:id', authenticate, getTicket);

Phase Three: Databases — The Most Important Thing Tutorials Skip

The database is where most backend performance problems originate and where most tutorials spend the least time. You can write perfect server code and still have a system that collapses at scale because of database mistakes that are entirely preventable once you understand a few fundamentals.

SQL first — even if you use an ORM

Learn SQL before you learn an ORM. This is not negotiable. An ORM like TypeORM or Prisma is writing SQL on your behalf, and if you do not understand what SQL it is generating, you will write ORM code that produces queries you would never write by hand, and you will not notice until the table has a million rows and suddenly every page load takes four seconds.

-- The kind of query an ORM generates that seems fine
-- until your tickets table has 500,000 rows

-- What TypeORM generates for a naive find with relations:
SELECT ticket.*, comment.*
FROM ticket
LEFT JOIN comment ON comment.ticket_id = ticket.id
WHERE ticket.account_id = $1
-- No LIMIT. Returns every ticket with every comment.
-- 500,000 rows x average 10 comments = 5 million rows across the wire.

-- What you should have written:
SELECT ticket.id, ticket.subject, ticket.status, ticket.created_at
FROM ticket
WHERE ticket.account_id = $1
  AND ticket.status != 'closed'
ORDER BY ticket.created_at DESC
LIMIT 20 OFFSET $2
-- 20 rows. The joins and comments load separately when needed.

The concepts you need to understand in SQL, in order of importance:

Indexes — the single most impactful thing to understand in database performance. An index is a data structure that makes lookups fast at the cost of slower writes and more storage. Every column you filter or sort by in a WHERE or ORDER BY clause should have an index, unless you understand why it doesn’t need one.

-- Without index: full table scan — reads every row
-- With 500,000 rows, this takes seconds
SELECT * FROM ticket WHERE account_id = '123' AND status = 'open';

-- With a composite index: reads only matching rows — milliseconds
CREATE INDEX idx_ticket_account_status ON ticket(account_id, status);

Query execution plans — every serious database has an EXPLAIN command that shows you how the database will execute a query. Learn to read it. Seq Scan means full table scan — usually bad. Index Scan means it used an index — usually good.

EXPLAIN ANALYZE
SELECT * FROM ticket WHERE account_id = '123' AND status = 'open';
-- Read the output. Find the Seq Scans. Add indexes.

Transactions — a transaction is a unit of work that either completes entirely or not at all. If you are modifying multiple tables and one of the operations fails, a transaction ensures you do not end up with partial data.

import { Pool } from 'pg';
const pool = new Pool({ connectionString: config.databaseUrl });

// Without a transaction — dangerous
async function resolveTicketUnsafe(ticketId: string, userId: string) {
  await pool.query(
    `UPDATE ticket SET status = 'resolved', resolved_at = NOW() WHERE id = $1`,
    [ticketId]
  ); // succeeds

  await pool.query(
    `INSERT INTO activity (ticket_id, type, created_by) VALUES ($1, $2, $3)`,
    [ticketId, 'RESOLVED', userId]
  ); // fails — ticket marked resolved but no activity record exists
}

// With a transaction — safe
async function resolveTicketSafe(ticketId: string, userId: string) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    await client.query(
      `UPDATE ticket SET status = 'resolved', resolved_at = NOW() WHERE id = $1`,
      [ticketId]
    );

    await client.query(
      `INSERT INTO activity (ticket_id, type, created_by) VALUES ($1, $2, $3)`,
      [ticketId, 'RESOLVED', userId]
    );

    await client.query('COMMIT');
  } catch (error) {
    await client.query('ROLLBACK');
    throw error; // re-throw so the route handler can pass it to next()
  } finally {
    client.release(); // always release back to the pool
  }
}

N+1 queries — the most common ORM performance mistake. You query for a list of 100 tickets, then for each ticket you query for its comments — that is 101 queries instead of 1.

// ❌ N+1 — 1 query for tickets + 1 query per ticket for comments = 101 queries
const tickets = await ticketRepository.findAll({ accountId });
for (const ticket of tickets) {
  ticket.comments = await commentRepository.findByTicketId(ticket.id);
}

// ✅ JOIN in one query — 1 query total
const result = await pool.query(
  `
  SELECT
    t.id, t.subject, t.status,
    json_agg(c.*) FILTER (WHERE c.id IS NOT NULL) AS comments
  FROM ticket t
  LEFT JOIN comment c ON c.ticket_id = t.id
  WHERE t.account_id = $1
  GROUP BY t.id
`,
  [accountId]
);
const tickets = result.rows;

NoSQL — when and why

SQL databases are the right default for most applications. Use them until you have a specific reason not to. The specific reasons are:

  • Document-oriented data where the schema varies significantly per record (MongoDB)
  • Key-value caching where you need fast reads by a single key (Redis)
  • Time-series data where you are inserting millions of timestamped rows (TimescaleDB, InfluxDB)
  • Graph data where relationships between entities are the primary concern (Neo4j)

Redis deserves special mention because it is almost universally useful as a second database alongside your primary SQL database. Use Redis for session storage, caching, and rate limiting:

import { createClient } from 'redis';
const redis = createClient({ url: config.redisUrl });
await redis.connect(); // node-redis v4 requires an explicit connect at startup

// Caching an expensive query in an Express route
router.get('/accounts/:id/tickets', authenticate, async (req, res, next) => {
  try {
    const cacheKey = `account:${req.params.id}:tickets`;
    const cached = await redis.get(cacheKey);

    if (cached) {
      res.json({ data: JSON.parse(cached) }); // same shape as the uncached response
      return;
    }

    const tickets = await ticketRepository.findByAccount(req.params.id);
    await redis.setEx(cacheKey, 300, JSON.stringify(tickets)); // cache 5 min
    res.json({ data: tickets });
  } catch (error) {
    next(error);
  }
});

Phase Four: Authentication and Security — Non-Negotiable Fundamentals

Security is the area where frontend developers moving to backend are most likely to make dangerous mistakes — not out of carelessness but out of not knowing what they do not know. The browser protects you from many attack surfaces. The server does not.

Passwords

Never store plain text passwords. Use bcrypt, scrypt, or Argon2 — algorithms designed specifically for password hashing that are intentionally slow to make brute-force attacks expensive.

import * as bcrypt from 'bcrypt';

async function hashPassword(plain: string): Promise<string> {
  return bcrypt.hash(plain, 12); // 12 = cost factor, higher = slower to brute force
}

// Verifying — bcrypt.compare is timing-safe
// Never use === to compare password hashes
async function verifyPassword(plain: string, hashed: string): Promise<boolean> {
  return bcrypt.compare(plain, hashed);
}

JWT — what they are and what they are not

A JWT is a signed token, not an encrypted one by default. Anyone who holds the token can decode its payload. The signature verifies that the payload was issued by your server — it does not hide the payload’s contents. Do not put sensitive information in JWT payloads.
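You can verify this yourself: the payload is just base64url-encoded JSON, readable by anyone who has the token. A sketch that decodes a token’s middle segment with no secret involved (the demo token is hand-built for illustration, not signed by a real server):

```typescript
// A JWT is three base64url segments: header.payload.signature
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payload = token.split('.')[1];
  // No secret, no verification: the payload was never hidden
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

// Hand-built demo token; the signature is fake, decoding still works
const header = Buffer.from(JSON.stringify({ alg: 'HS256', typ: 'JWT' })).toString('base64url');
const body = Buffer.from(JSON.stringify({ sub: 'user-1', role: 'agent' })).toString('base64url');
const demoToken = `${header}.${body}.not-a-real-signature`;

// decodeJwtPayload(demoToken) returns { sub: 'user-1', role: 'agent' }
```

The signature only matters when the server calls verify; reading the payload requires nothing at all, which is exactly why secrets must never go in it.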

import jwt from 'jsonwebtoken';

// Signing a JWT at login
const payload = {
  sub: user.id,
  email: user.email, // safe — not sensitive
  role: user.role,
  // Never include: password, credit card numbers, SSN
};

const token = jwt.sign(payload, config.jwtSecret, { expiresIn: '1h' });

// Authentication middleware — runs on every protected route
export function authenticate(
  req: Request,
  res: Response,
  next: NextFunction
): void {
  const token = req.headers.authorization?.split(' ')[1];

  if (!token) {
    res
      .status(401)
      .json({ error: { code: 'UNAUTHORIZED', message: 'No token provided' } });
    return;
  }

  try {
    const decoded = jwt.verify(token, config.jwtSecret) as JwtPayload;
    (req as AuthenticatedRequest).user = decoded;
    next();
  } catch {
    res.status(401).json({
      error: {
        code: 'INVALID_TOKEN',
        message: 'Token is invalid or expired',
      },
    });
  }
}

The OWASP Top Ten — know these

SQL Injection — parameterised queries prevent it. ORMs handle this for you, but understand why:

// ❌ SQL injection — user input directly in the query string
const result = await pool.query(
  `SELECT * FROM ticket WHERE subject LIKE '%${searchTerm}%'`
  // If searchTerm is: '; DROP TABLE ticket; --
  // You have just destroyed your database
);

// ✅ Parameterised query — user input is never interpolated into SQL
const result = await pool.query('SELECT * FROM ticket WHERE subject LIKE $1', [
  `%${searchTerm}%`,
]);

Broken Authentication — use established libraries for auth flows. Do not implement JWT verification, session management, or password reset flows from scratch unless you have done serious security study.

Security Misconfiguration — environment variables in version control, debug mode enabled in production, default credentials left unchanged. Establish a .env.example file and a checklist for production deployments.

Sensitive Data Exposure — never log passwords, tokens, credit card numbers, or health data. Be explicit about what your API returns and verify sensitive fields are not leaking into responses.

Phase Five: API Design — Where Frontend Knowledge Is a Superpower

This is where your frontend background genuinely gives you an advantage that most backend developers do not have.

You know what it is like to consume an API. You know the frustration of an API that returns more data than you need, requiring you to filter client-side. You know the frustration of an API that requires three requests to get data that logically belongs together. You know the confusion of inconsistent naming, of status codes that do not mean what they say, of error responses that do not tell you what went wrong.

That knowledge is exactly what makes a good API designer. Use it.

REST conventions — the parts that matter

Resource-oriented URLs — URLs should identify resources, not actions. Verbs belong in HTTP methods, not in paths.

❌  POST /createTicket
❌  GET  /getTicketById?id=123
❌  POST /resolveTicket

✅  POST   /tickets              — create a ticket
✅  GET    /tickets/123          — get a specific ticket
✅  PATCH  /tickets/123          — partially update a ticket
✅  POST   /tickets/123/resolve  — action on a resource (acceptable exception)
✅  DELETE /tickets/123          — delete a ticket

Consistent response shapes — define a response envelope and stick to it:

// A consistent response shape for every endpoint
interface ApiResponse<T> {
  data: T;
  meta?: { page: number; pageSize: number; total: number };
}

interface ApiError {
  error: {
    code: string; // machine-readable
    message: string; // human-readable
    fields?: Record<string, string>; // validation errors
  };
}

// Every success: { "data": { "id": "123", "subject": "Login broken" } }
// Every error:   { "error": { "code": "VALIDATION_FAILED", "message": "..." } }

Pagination — always — never return unbounded lists:

router.get(
  '/tickets',
  authenticate,
  async (req: AuthenticatedRequest, res: Response, next: NextFunction) => {
    try {
      const page = Math.max(1, parseInt(req.query.page as string, 10) || 1);
      const pageSize = Math.min(
        100,
        parseInt(req.query.pageSize as string, 10) || 20
      );
      const offset = (page - 1) * pageSize;

      const { rows: items } = await pool.query(
        `SELECT * FROM ticket WHERE account_id = $1
         ORDER BY created_at DESC LIMIT $2 OFFSET $3`,
        [req.user.accountId, pageSize, offset]
      );

      const {
        rows: [{ count }],
      } = await pool.query(
        `SELECT COUNT(*) FROM ticket WHERE account_id = $1`,
        [req.user.accountId]
      );

      const total = parseInt(count, 10); // pg returns COUNT(*) as a string

      res.json({
        data: items,
        meta: {
          page,
          pageSize,
          total,
          totalPages: Math.ceil(total / pageSize),
        },
      });
    } catch (error) {
      next(error);
    }
  }
);

Versioning from day one

// Version in the path — register routes under a versioned prefix from day one
app.use('/api/v1', v1Router);
// When breaking changes are needed later, add /api/v2 alongside — no migration pressure

Phase Six: Architecture Patterns — How Things Fit Together

Layered architecture — the foundation

The most common and most useful architectural pattern for backend services:

Request
   ↓
Router         — maps URL + method to the right handler
   ↓
Middleware     — auth, validation, logging, rate limiting
   ↓
Route Handler  — HTTP concerns: parse, call service, format response
   ↓
Service        — business logic: what the application actually does
   ↓
Repository     — data access: SQL queries and database interactions
   ↓
Database

// Route handler — HTTP concerns only, thin
router.post(
  '/tickets',
  authenticate,
  validateBody(createTicketSchema),
  async (req: AuthenticatedRequest, res: Response, next: NextFunction) => {
    try {
      const ticket = await ticketService.create(req.body, req.user.id);
      res.status(201).json({ data: ticket });
    } catch (error) {
      next(error);
    }
  }
);

// Service — business logic, no HTTP or database concerns
class TicketService {
  async create(dto: CreateTicketDto, userId: string): Promise<Ticket> {
    // Business rule: users can only have 10 open tickets at once
    const openCount = await ticketRepository.countOpen(userId);
    if (openCount >= 10) {
      throw new BusinessRuleError('Open ticket limit reached');
    }
    return ticketRepository.create({ ...dto, createdBy: userId });
  }
}

// Repository — raw SQL, no business logic
class TicketRepository {
  async countOpen(userId: string): Promise<number> {
    const { rows } = await pool.query(
      `SELECT COUNT(*) FROM ticket WHERE created_by = $1 AND status != 'closed'`,
      [userId]
    );
    return parseInt(rows[0].count, 10);
  }

  async create(data: Partial<Ticket>): Promise<Ticket> {
    const { rows } = await pool.query(
      `INSERT INTO ticket (subject, description, priority, account_id, created_by)
       VALUES ($1, $2, $3, $4, $5) RETURNING *`,
      [
        data.subject,
        data.description,
        data.priority,
        data.accountId,
        data.createdBy,
      ]
    );
    return rows[0];
  }
}

The global error handler — the most important Express pattern

Express’s global error handler is the central place where all thrown errors are formatted into HTTP responses. It must be declared with exactly four parameters — Express identifies error-handling middleware by its arity, not by the name of the err argument:

// src/middleware/error.middleware.ts

export class NotFoundError extends Error {
  statusCode = 404;
  code = 'NOT_FOUND';
  constructor(message: string) {
    super(message);
    this.name = 'NotFoundError';
  }
}

export class BusinessRuleError extends Error {
  statusCode = 422;
  code = 'BUSINESS_RULE_VIOLATION';
  constructor(message: string) {
    super(message);
    this.name = 'BusinessRuleError';
  }
}

export class ValidationError extends Error {
  statusCode = 422;
  code = 'VALIDATION_FAILED';
  fields: Record<string, string>;
  constructor(message: string, fields: Record<string, string>) {
    super(message);
    this.name = 'ValidationError';
    this.fields = fields;
  }
}

// Registered last in app.ts — four params tells Express this is the error handler
export function errorHandler(
  err: Error,
  req: Request,
  res: Response,
  next: NextFunction // must be present even if unused
): void {
  logger.error({ message: err.message, stack: err.stack, path: req.path });

  if (err instanceof NotFoundError || err instanceof BusinessRuleError) {
    // instanceof narrows the type — no casts needed to reach statusCode/code
    res.status(err.statusCode).json({
      error: { code: err.code, message: err.message },
    });
    return;
  }

  if (err instanceof ValidationError) {
    res.status(422).json({
      error: { code: err.code, message: err.message, fields: err.fields },
    });
    return;
  }

  res.status(500).json({
    error: {
      code: 'INTERNAL_ERROR',
      message: 'An unexpected error occurred',
    },
  });
}

Validation middleware with zod

import { z } from 'zod';

export const createTicketSchema = z.object({
  subject: z.string().min(1).max(200),
  description: z.string().min(1).max(5000),
  priority: z.enum(['low', 'medium', 'high', 'critical']),
  accountId: z.string().uuid(),
});

export function validateBody<T>(schema: z.ZodSchema<T>) {
  return (req: Request, res: Response, next: NextFunction) => {
    const result = schema.safeParse(req.body);
    if (!result.success) {
      next(
        new ValidationError(
          'Invalid request body',
          result.error.flatten().fieldErrors as any
        )
      );
      return;
    }
    req.body = result.data;
    next();
  };
}

Environment configuration

// src/config/env.ts — validate at startup, fail loudly before the first request
function requireEnv(key: string): string {
  const value = process.env[key];
  if (!value) throw new Error(`Missing required environment variable: ${key}`);
  return value;
}

export const config = {
  port: parseInt(process.env.PORT || '3000', 10),
  databaseUrl: requireEnv('DATABASE_URL'),
  jwtSecret: requireEnv('JWT_SECRET'),
  redisUrl: requireEnv('REDIS_URL'),
  nodeEnv: process.env.NODE_ENV || 'development',
};

Logging — structured, not printf

import pino from 'pino';
export const logger = pino({
  level: config.nodeEnv === 'production' ? 'info' : 'debug',
});

// ❌ Not useful in production
console.log('Creating ticket for user ' + userId);

// ✅ Structured — queryable, correlatable across services
logger.info({
  event: 'ticket.create.started',
  userId,
  accountId,
  requestId: req.id,
});
logger.error({
  event: 'ticket.create.failed',
  userId,
  accountId,
  requestId: req.id,
  error: err.message,
});

Phase Seven: The Things That Separate Good from Great

These are the concepts that tutorials almost never cover and that senior backend developers take for granted.

Rate limiting and throttling

Any publicly accessible endpoint will be abused. Rate limiting is not optional in production:

import rateLimit from 'express-rate-limit';

// General API rate limit
const apiLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 100,
  standardHeaders: true,
  legacyHeaders: false,
  handler: (req, res) => {
    res.status(429).json({
      error: { code: 'RATE_LIMIT_EXCEEDED', message: 'Too many requests' },
    });
  },
});

// Stricter limit for auth — prevent brute force
const authLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 10,
});

app.use('/api/', apiLimiter);
app.use('/api/v1/auth/login', authLimiter);

Idempotency

An idempotent operation produces the same result no matter how many times it is called with the same input. In distributed systems, requests fail and retry. If your payment endpoint creates a charge every time it is called and a client retries due to a network timeout, you charge the user twice. Idempotency keys solve this:

router.post(
  '/payments',
  authenticate,
  async (req: AuthenticatedRequest, res: Response, next: NextFunction) => {
    try {
      const idempotencyKey = req.headers['x-idempotency-key'] as string;

      if (!idempotencyKey) {
        res.status(400).json({
          error: {
            code: 'MISSING_KEY',
            message: 'X-Idempotency-Key header required',
          },
        });
        return;
      }

      // Return the original result if we have already processed this key
      const cached = await redis.get(`idempotency:${idempotencyKey}`);
      if (cached) {
        res.json(JSON.parse(cached));
        return;
      }

      const result = await paymentService.charge(req.body);

      // Store for 24 hours
      await redis.setEx(
        `idempotency:${idempotencyKey}`,
        86400,
        JSON.stringify(result)
      );

      res.status(201).json(result);
    } catch (error) {
      next(error);
    }
  }
);

Graceful shutdown

// src/server.ts
const server = app.listen(config.port, () => {
  logger.info(`Server listening on port ${config.port}`);
});

process.on('SIGTERM', () => {
  logger.info('SIGTERM received — starting graceful shutdown');

  server.close(async () => {
    await pool.end(); // close database connection pool
    await redis.quit(); // close Redis connection
    logger.info('Graceful shutdown complete');
    process.exit(0);
  });

  // Force exit if graceful shutdown takes too long
  setTimeout(() => {
    logger.error('Graceful shutdown timed out — forcing exit');
    process.exit(1);
  }, 30_000);
});

Health checks

app.get('/health', async (req: Request, res: Response) => {
  const dbHealthy = await pool
    .query('SELECT 1')
    .then(() => true)
    .catch(() => false);
  const redisHealthy = await redis
    .ping()
    .then(() => true)
    .catch(() => false);

  const status = dbHealthy && redisHealthy ? 'healthy' : 'degraded';

  res.status(status === 'healthy' ? 200 : 503).json({
    status,
    timestamp: new Date().toISOString(),
    dependencies: {
      database: dbHealthy ? 'up' : 'down',
      redis: redisHealthy ? 'up' : 'down',
    },
  });
});

The Learning Path, Concretely

If I were starting this journey today, this is the order I would follow:

1. HTTP fundamentals and the Node.js event loop — build a raw HTTP server without Express, just Node’s http module
2. Express, routing, middleware — build a REST API for a domain you know: tickets, products, users
3. SQL and databases — add a PostgreSQL database; no ORM yet, raw pg queries
4. ORM layer — add TypeORM or Prisma and compare what it generates to your raw queries
5. Authentication — JWT-based auth, bcrypt for passwords, protected routes
6. Validation and error handling — request validation with zod, a global error handler middleware
7. Redis and caching — add Redis, cache one expensive query, add rate limiting
8. Testing — unit test your services, integration test your routes with supertest
9. Deployment — Docker, a CI/CD pipeline, environment configuration
10. Production concerns — logging with pino, health checks, graceful shutdown, idempotency

The thing that will accelerate this path more than anything else is writing real code against a real database with real data volumes. Spinning up a PostgreSQL database with a million rows of test data and watching what happens to your queries when you add and remove indexes teaches more in an afternoon than a week of tutorials.
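If you want to try this, PostgreSQL’s generate_series makes seeding cheap. A sketch (the table and column names match the examples above; adjust to your actual schema and any NOT NULL columns):

```sql
-- Seed a million tickets with some variation in status and account
INSERT INTO ticket (subject, status, account_id, created_at)
SELECT
  'Ticket ' || n,
  (ARRAY['open', 'pending', 'closed'])[1 + n % 3],
  ((n % 500) + 1)::text,
  NOW() - (n || ' minutes')::interval
FROM generate_series(1, 1000000) AS n;

-- Watch the plan change as the index comes and goes
EXPLAIN ANALYZE SELECT * FROM ticket WHERE account_id = '123' AND status = 'open';
CREATE INDEX idx_ticket_account_status ON ticket(account_id, status);
EXPLAIN ANALYZE SELECT * FROM ticket WHERE account_id = '123' AND status = 'open';
DROP INDEX idx_ticket_account_status;
```

Running the same EXPLAIN ANALYZE before and after the CREATE INDEX is the afternoon of learning the paragraph above describes.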

What Frontend Gives You That Backend Developers Often Lack

I want to end here because I think it is genuinely important and not said enough.

Frontend developers who move to backend bring something that pure backend developers often undervalue: a visceral understanding of what it is like to consume the APIs they are building.

They know what it feels like when an API returns inconsistent error shapes and the client has to handle every response differently. They know the frustration of an API that forces the client to make three requests to assemble data that the server could have composed in one. They know the impact of a slow API endpoint on the user experience — because they have been on the other side, watching the loading spinner, wondering why the data is not there yet.

That empathy — for the client consuming the API — produces better API design. It produces response shapes that are genuinely useful rather than technically correct. It produces pagination that makes sense. It produces error messages that contain enough information to recover, not just enough to log.

You are not starting from zero. You are starting with a perspective that genuine backend specialists often have to be taught years into their career, because they have spent their whole career on the server side and have never had to feel what they produce.

Use that. It is more valuable than knowing which ORM to use.

Conclusion

Backend development is learnable. The concepts are not beyond the reach of a developer who has built complex frontend systems — the thinking patterns transfer more than most people assume. But the learning has to be honest about what it is actually teaching.

Following a tutorial that builds a todo API teaches you the syntax of Express. It does not teach you to think about concurrency, to understand what your database queries are doing, to design APIs that the clients consuming them will actually want to use, or to build systems that hold up when something goes wrong at two in the morning.

The mental model — the shift from single-user to multi-user thinking, from “does this work” to “what happens under load” — is the real learning. The specific syntax follows. The frameworks change. The concepts do not.

Start at the level below the framework. Write a raw HTTP server. Write raw SQL queries. Understand what the ORM is doing before you let it do it for you. And bring with you the knowledge you already have about what it feels like to be on the other side of the API.

That combination — platform understanding and consumer empathy — is rarer than any specific technology choice. And it is what will make you different.