Introduction
Every architectural pattern in software starts as a solution to a real problem. Microservices were a solution to the problem of large, monolithic backends that were slow to deploy, impossible to scale independently, and owned by a team too large for any one person to understand the whole thing. Microfrontends are the same idea applied to the frontend layer — and they carry the same trade-offs, the same complexity costs, and the same tendency to be adopted for the wrong reasons.
In this article, we will look at what microfrontends are, how to build them, where they work, where they fail, and the implementation approach that is still significantly underused despite being, in many cases, the best one available.
What Is A Microfrontend
A microfrontend is an independently deployable piece of a frontend application, owned by an independent team, that is composed with other microfrontends at runtime to form the complete application that the user sees.
The key words are independently deployable and independently owned. If two pieces of your frontend always deploy together and are owned by the same team, they are not microfrontends — they are modules in a monolith. The distinction matters because the entire justification for microfrontend complexity rests on the deployment independence and team ownership model. Without those, you are paying the costs without receiving the benefits.
Monolithic Frontend:
One repository → one build → one deployment → one application
Microfrontend Architecture:
Team A repository → Team A build → Team A deployment ↘
Team B repository → Team B build → Team B deployment → Composed Application
Team C repository → Team C build → Team C deployment ↗
The shell application (sometimes called the host or container) orchestrates the composition — it decides which microfrontends to load, where to render them, and how to route between them. Each microfrontend is responsible for a specific domain of the application and knows nothing about the shell or other microfrontends beyond a shared contract.
Why They Exist
The problem microfrontends solve is an organizational one, not a technical one. This is the insight most articles about microfrontends bury in a footnote when it should be the headline.
When a frontend codebase reaches a certain size — typically when it is owned by more than four or five developers who are organized into distinct teams with distinct product domains — several problems emerge:
Deployment coupling. Team A finishes a feature on Tuesday. It cannot be deployed because Team B has a half-finished feature in the same codebase that is not ready. Teams block each other on deployment cadence regardless of whether their code is related.
Merge conflicts and code ownership ambiguity. With multiple teams working in the same repository, merge conflicts become common and the question of “who owns this component” is frequently unclear. Code reviews cross team boundaries, slow down, and create friction.
Scaling the team is hard. Onboarding a new developer requires understanding the entire application rather than one domain. The blast radius of any change is potentially the entire application.
Technology lock-in. The entire frontend is committed to the framework chosen at the project’s inception. Migrating or experimenting with alternatives requires migrating everything simultaneously.
These are real problems at scale. If your team is five people who all know the whole codebase, microfrontends are not solving a problem you have — they are introducing problems you did not have. If your team is fifty people across five product domains who need to ship independently, they are solving a real problem.
The Implementation Approaches
There are several ways to implement microfrontends, each with different trade-offs. Understanding all of them is important before choosing one.
Build-time integration (npm packages)
The simplest form. Each microfrontend is published as an npm package. The shell application imports them as dependencies.
// shell/package.json
{
  "dependencies": {
    "@company/tickets-mfe": "^2.3.0",
    "@company/accounts-mfe": "^1.8.0",
    "@company/monitoring-mfe": "^3.1.0"
  }
}
// shell/src/app/app.module.ts
import { TicketsModule } from '@company/tickets-mfe';
import { AccountsModule } from '@company/accounts-mfe';

@NgModule({
  imports: [TicketsModule, AccountsModule],
})
export class AppModule {}
The honest trade-off: This is not really runtime composition. When the shell deploys, it has already bundled the specific versions of each microfrontend that were installed. Updating a microfrontend requires updating the shell’s dependencies and redeploying the shell. Team A and Team B are still coupled at deployment time, just through package versions rather than a shared codebase.
Use this when you want code ownership separation without true deployment independence. It is the simplest approach and appropriate for teams that want isolated development without the operational complexity of runtime composition.
Runtime integration via iframes
The oldest approach and the most isolated. Each microfrontend is a separate application hosted at a separate URL. The shell embeds them in iframes.
<!-- Shell renders this for the tickets domain -->
<iframe
  src="https://tickets.internal.company.com"
  style="width: 100%; height: 100%; border: none;"
  title="Ticket Management"
>
</iframe>
Where it works: When complete isolation is the requirement — separate security contexts, no shared JavaScript, no possibility of one microfrontend affecting another. Government and financial services applications where different domains have genuinely different security requirements legitimately use this.
Where it fails: Everywhere else. Iframes cannot share state with the shell or with each other without postMessage. Deep linking and browser history integration require careful coordination. Accessibility across iframe boundaries is genuinely hard. Performance suffers because each iframe is a separate browser context. The user experience often feels fragmented in ways that are difficult to solve without significant engineering effort.
Iframes are the right tool for embedding genuinely third-party content — payment forms, maps, external widgets. They are a poor tool for composing parts of your own application.
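When iframes are the right call, the postMessage channel deserves the same rigor as any API boundary: validate the origin before trusting anything. A minimal sketch, with hypothetical message types and origin URLs, of how a shell might screen messages from an embedded MFE:

```typescript
// Hypothetical message types and trusted origins for illustration.
type IframeMfeMessage =
  | { type: 'mfe:navigated'; path: string }
  | { type: 'mfe:resize'; height: number };

const TRUSTED_MFE_ORIGINS = new Set(['https://tickets.internal.company.com']);

// Never trust a postMessage payload without checking its origin first
function parseMfeMessage(origin: string, data: unknown): IframeMfeMessage | null {
  if (!TRUSTED_MFE_ORIGINS.has(origin)) return null;
  const msg = data as IframeMfeMessage;
  if (msg && (msg.type === 'mfe:navigated' || msg.type === 'mfe:resize')) {
    return msg;
  }
  return null;
}

// Shell side (browser only):
// window.addEventListener('message', (event) => {
//   const msg = parseMfeMessage(event.origin, event.data);
//   if (msg?.type === 'mfe:resize') { /* adjust the iframe's height */ }
// });
//
// Iframe side — always pass an explicit target origin, never '*':
// window.parent.postMessage({ type: 'mfe:resize', height: 1240 }, 'https://shell.company.com');
```

Even with this in place, each message type you add is another piece of coupling across the iframe boundary, which is part of why the approach scales poorly.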
Runtime integration via JavaScript
Each microfrontend is built into a JavaScript bundle that exposes a specific interface. The shell loads the bundle at runtime and calls the mount function.
// Team A builds this bundle — accounts-mfe.js
// It exposes a lifecycle interface
window.AccountsMFE = {
  mount(containerId, config) {
    const container = document.getElementById(containerId);
    // Render the accounts microfrontend into this container
    ReactDOM.render(<AccountsApp config={config} />, container);
  },
  unmount(containerId) {
    const container = document.getElementById(containerId);
    ReactDOM.unmountComponentAtNode(container);
  },
};
// Shell loads and mounts it
async function loadMicrofrontend(name: string, containerId: string) {
  // Load the bundle dynamically
  await loadScript(`https://cdn.company.com/${name}/latest/bundle.js`);
  // Call the mount lifecycle — e.g. name 'Accounts' resolves to window.AccountsMFE
  window[`${name}MFE`].mount(containerId, {
    user: currentUser,
    theme: currentTheme,
  });
}
This approach works but it is fragile. Global namespace pollution, no TypeScript types across the boundary, manual lifecycle management, no dependency sharing — the bundle for React is loaded once per microfrontend even if they all use the same version.
It was the dominant approach before Webpack 5’s Module Federation made a better one available.
Module Federation — the right approach for most teams
Webpack 5 introduced Module Federation in 2020, and it remains one of the most significant, and most underused, frontend architecture advancements of recent years. Module Federation allows one Webpack build to dynamically load code from another Webpack build at runtime — including sharing dependencies so that React or Angular is not bundled multiple times.
// accounts-mfe/webpack.config.js — this is a "remote"
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'accountsMFE',
      filename: 'remoteEntry.js', // the manifest file the shell fetches
      // What this MFE exposes to the outside world
      exposes: {
        './AccountsApp': './src/AccountsApp',
        './ProfileWidget': './src/components/ProfileWidget',
      },
      // Dependencies to share — loaded once, used by all MFEs
      shared: {
        react: { singleton: true, requiredVersion: '^18.0.0' },
        'react-dom': { singleton: true, requiredVersion: '^18.0.0' },
        '@company/design-system': { singleton: true },
      },
    }),
  ],
};
// shell/webpack.config.js — this is the "host"
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'shell',
      // Where to find the remote MFEs
      remotes: {
        accountsMFE: 'accountsMFE@https://accounts.company.com/remoteEntry.js',
        ticketsMFE: 'ticketsMFE@https://tickets.company.com/remoteEntry.js',
      },
      shared: {
        react: { singleton: true, requiredVersion: '^18.0.0' },
        'react-dom': { singleton: true, requiredVersion: '^18.0.0' },
        '@company/design-system': { singleton: true },
      },
    }),
  ],
};
// Shell imports from the remote — looks like a local import, loads at runtime
const AccountsApp = React.lazy(() => import('accountsMFE/AccountsApp'));

function App() {
  return (
    <Router>
      <Route path="/accounts/*">
        <Suspense fallback={<LoadingSpinner />}>
          <AccountsApp />
        </Suspense>
      </Route>
    </Router>
  );
}
What Module Federation gives you:
True runtime composition — the AccountsMFE bundle is loaded from its own CDN when the user navigates to /accounts. Team A can deploy a new version of AccountsMFE and every user gets it on their next navigation, without the shell redeploying.
Dependency sharing — React is loaded once. The design system is loaded once. The shared configuration ensures singletons are respected across module boundaries.
TypeScript across boundaries — with the right tooling (Federation type declarations or manual type sharing), you get type safety at the integration points.
Genuine deployment independence — Team A merges and deploys to their CDN. Team B does the same. The shell does not need to know about either deployment. This is the actual promise of microfrontends, delivered properly.
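On the TypeScript point: one low-tech way to get types across the boundary, absent dedicated federation type tooling, is a hand-maintained ambient declaration file in the shell. Everything below, the file path, the config shape, the optional prop, is illustrative rather than any library's prescribed format:

```typescript
// shell/src/remotes.d.ts — hypothetical ambient declarations for remote modules.
// In a real setup the config interface would live in a shared types package
// that both teams depend on, not be redeclared here.
declare module 'accountsMFE/AccountsApp' {
  import type { ComponentType } from 'react';

  interface MicrofrontendConfig {
    user: { id: string; name: string; role: string; permissions: string[] };
    theme: 'light' | 'dark';
    locale: string;
    apiBaseUrl: string;
  }

  const AccountsApp: ComponentType<{ config?: MicrofrontendConfig }>;
  export default AccountsApp;
}
```

The obvious weakness is that the declaration can drift from what the remote actually exposes, which is why generated federation type declarations are preferable when the tooling supports them.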
Native Federation — Module Federation without Webpack
For Angular teams specifically, @angular-architects/native-federation deserves attention. It implements the Module Federation concepts using ES modules and import maps rather than Webpack’s proprietary format, which means it works with any bundler — esbuild, Vite, Rollup — and produces standards-based output built on primitives that browsers already support natively.
// federation.config.ts — accounts-mfe
import {
  withNativeFederation,
  shareAll,
} from '@angular-architects/native-federation/config';

export default withNativeFederation({
  name: 'accountsMFE',
  exposes: {
    './Module': './src/app/accounts/accounts.module.ts',
  },
  shared: {
    ...shareAll({
      singleton: true,
      strictVersion: true,
      requiredVersion: 'auto',
    }),
  },
});
The benefit: you are not locked into Webpack. As the Angular ecosystem moves toward esbuild-based builds (the default in Angular 17+), Native Federation keeps your microfrontend architecture compatible without requiring you to maintain a Webpack configuration in a non-Webpack project.
The Communication Problem — and How to Solve It
The question every microfrontend architecture eventually has to answer is: how do the microfrontends talk to each other?
This question reveals a lot about whether the architecture is well-designed. Microfrontends that talk to each other extensively are not genuinely independent — they have just moved their coupling from build time to runtime. The goal is minimal, well-defined communication across the boundary.
What should cross the boundary
Almost nothing. The ideal microfrontend receives its configuration from the shell (current user, theme, locale, initial route) and is otherwise self-contained. It fetches its own data, manages its own state, and renders its own UI.
// The shell's contract with each microfrontend — keep it minimal
interface MicrofrontendConfig {
  user: {
    id: string;
    name: string;
    role: string;
    permissions: string[];
  };
  theme: 'light' | 'dark';
  locale: string;
  apiBaseUrl: string;
}
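Because this config crosses a team boundary at runtime, an MFE cannot assume the shell honored the contract, especially during a staggered rollout. A sketch of a defensive check at the mount point, using a hypothetical assertMfeConfig helper, so the MFE fails fast with a clear error rather than crashing later on a missing field:

```typescript
interface MicrofrontendConfig {
  user: { id: string; name: string; role: string; permissions: string[] };
  theme: 'light' | 'dark';
  locale: string;
  apiBaseUrl: string;
}

// Hypothetical helper — validate the shell's config at the boundary
function assertMfeConfig(value: unknown): MicrofrontendConfig {
  const c = value as MicrofrontendConfig;
  const valid =
    c != null &&
    typeof c.user?.id === 'string' &&
    typeof c.user?.role === 'string' &&
    Array.isArray(c.user?.permissions) &&
    (c.theme === 'light' || c.theme === 'dark') &&
    typeof c.locale === 'string' &&
    typeof c.apiBaseUrl === 'string';
  if (!valid) {
    throw new Error(
      'Shell passed an invalid MicrofrontendConfig — check the shared contract version'
    );
  }
  return c;
}
```

The check runs once per mount, so the cost is negligible, and the error message points directly at the contract rather than at whatever downstream code happened to touch the missing field first.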
A shared event bus — for the cases that genuinely need it
When microfrontends need to communicate without knowing about each other, a shared event bus is the cleanest approach. It is the same observer pattern you would use in any event-driven system, exposed through a shared singleton:
// @company/shell-events — a shared package with no dependencies
// Published once, imported by any MFE that needs it
type EventMap = {
  'user:updated': { userId: string; changes: Partial<User> };
  'cart:item-added': { productId: string; quantity: number };
  'navigation:route-changed': { from: string; to: string };
};

class EventBus {
  private static instance: EventBus;
  private listeners = new Map<string, Set<Function>>();

  static getInstance(): EventBus {
    if (!EventBus.instance) {
      EventBus.instance = new EventBus();
    }
    return EventBus.instance;
  }

  on<K extends keyof EventMap>(
    event: K,
    listener: (data: EventMap[K]) => void
  ): () => void {
    if (!this.listeners.has(event)) {
      this.listeners.set(event, new Set());
    }
    this.listeners.get(event)!.add(listener);
    // Return unsubscribe function
    return () => this.listeners.get(event)?.delete(listener);
  }

  emit<K extends keyof EventMap>(event: K, data: EventMap[K]): void {
    this.listeners.get(event)?.forEach((listener) => listener(data));
  }
}

export const eventBus = EventBus.getInstance();

// AccountsMFE emits when the user updates their profile
eventBus.emit('user:updated', { userId: user.id, changes: { name: newName } });

// Shell or other MFEs react to it
const unsubscribe = eventBus.on('user:updated', ({ userId, changes }) => {
  updateLocalUserCache(userId, changes);
});
The type safety here is important. The EventMap type ensures that both the emitter and the listener agree on the shape of the data. This is the shared contract across the team boundary — it should be versioned, documented, and changed carefully.
What to avoid
Shared state stores across boundaries. Redux or NgRx state stores should not be shared between microfrontends. When Team A’s microfrontend reads from Team B’s store, they have created an invisible coupling that will cause subtle bugs when Team B refactors their state shape.
Direct imports between microfrontends. If AccountsMFE imports a component from TicketsMFE, they are not independent — they are a distributed monolith. The only legitimate imports between microfrontends are the shared packages that both teams have agreed to maintain together.
Deep linking without coordination. Navigation is a shared concern. The shell owns the routing. Microfrontends should not be navigating to routes they do not own.
The Costs Nobody Talks About Enough
Every article about microfrontends tells you the benefits. Here are the costs that only become visible after you have lived with the architecture for a year.
Distributed complexity
A monolith has one build pipeline, one deployment, one set of logs, one performance budget. A microfrontend architecture has as many of each as you have microfrontends. When something goes wrong in production, you are debugging across multiple deployments, multiple log streams, and multiple teams.
On a project with six microfrontends, a user-reported bug that would have taken thirty minutes to find in a monolith can take half a day to trace through the distributed system. The shell loaded the wrong version of an MFE, which fetched data from an API that was behind by a deployment, which caused a race condition with a shared event that another MFE was listening to. None of those facts are visible in any single log stream.
Version management overhead
With Module Federation, Team A’s microfrontend and Team B’s microfrontend both depend on your design system. They are supposed to share it as a singleton. But Team A has upgraded to design system v3.0 and Team B is still on v2.8 because they have not had time to migrate.
Now you have a version conflict in a shared singleton. The behavior is undefined. One of them will “win” depending on load order, and the one that loses will behave incorrectly in ways that are confusing to debug.
Version alignment across teams requires coordination that partially defeats the independence the architecture was meant to provide.
// The federation config that avoids hard conflicts
shared: {
  '@company/design-system': {
    singleton: true,
    strictVersion: false, // don't crash on version mismatch
    requiredVersion: '>=2.0.0', // accept any version 2.x or higher
  },
}
// But now you have to be careful that your components
// work across the version range you've declared
Initial load performance
A microfrontend architecture requires multiple network requests at load time that a monolith bundles into one. The shell loads, fetches the remote entry manifests, resolves dependencies, downloads the MFE bundles. On a fast connection this is imperceptible. On a slow mobile connection or in a region far from your CDN, it is noticeable.
This is solvable — preloading, prefetching, aggressive caching, good CDN configuration — but it requires deliberate attention that a monolith does not.
// Prefetch upcoming route MFEs after the initial render
// so the user doesn't wait when they navigate
if ('requestIdleCallback' in window) {
  requestIdleCallback(() => {
    // When the browser is idle, preload the tickets MFE
    // so it's ready when the user navigates there
    import('ticketsMFE/TicketsApp');
  });
}
The testing pyramid becomes a pyramid range
Unit tests are easy — each microfrontend tests itself in isolation. Integration tests are harder — you need to test the microfrontend in the context of the shell. End-to-end tests are harder still — you need the full composition running, which means all microfrontends need to be in a deployable state simultaneously.
This is particularly painful during a breaking change to a shared contract. If the shell changes the MicrofrontendConfig interface, every MFE needs to update simultaneously before the end-to-end tests pass again. Independence at deployment time does not mean independence at testing time.
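Contract tests can catch some of this before the end-to-end stage. One lightweight option is a compile-time check in each MFE's test suite asserting that the shell's published config type still satisfies the subset the MFE actually reads. The types below are illustrative stand-ins for what would normally come from a shared package:

```typescript
// Stand-in for the type the shell publishes (normally imported from
// a shared types package such as @company/shared-types).
type ShellPublishedConfig = {
  user: { id: string; name: string; role: string; permissions: string[] };
  theme: 'light' | 'dark';
  locale: string;
  apiBaseUrl: string;
};

// The subset this MFE actually reads — keep it as small as possible
type AccountsMfeExpectation = {
  user: { id: string; permissions: string[] };
  apiBaseUrl: string;
};

// Stops compiling the moment the shell's contract no longer satisfies
// what this MFE expects — the break surfaces in CI, not in e2e tests.
type Satisfies<Expected, Actual extends Expected> = true;
type ContractHolds = Satisfies<AccountsMfeExpectation, ShellPublishedConfig>;

// Runtime side of the same idea: the MFE narrows to its expectation type,
// so it cannot silently start depending on extra fields.
export function accountsApiRoot(config: AccountsMfeExpectation): string {
  return `${config.apiBaseUrl}/accounts`;
}
```

The narrower the expectation type, the less often a shell-side change can break this MFE at all, which is the point of keeping the contract minimal.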
When Microfrontends Are and Are Not the Right Answer
Be honest about this before you start.
Microfrontends are probably the right answer when:
- You have more than four or five frontend teams, each owning a distinct product domain
- Teams are genuinely blocked by each other’s deployment schedules on a regular basis
- Different domains have legitimately different technology requirements (one team wants React, another has deep Angular expertise)
- The application is large enough that a new developer cannot reasonably understand the whole thing
- Independent deployment cadences — some domains ship daily, others ship monthly — are a genuine business requirement
Microfrontends are probably not the right answer when:
- You have one team that maintains the whole frontend
- Your monolith is slow to deploy because of CI/CD problems, not team coupling
- You are a startup that wants to “keep options open” for future team scaling
- The domains in your application are deeply interdependent and communicate constantly
- The team adopting them has no prior experience and is learning on a production system
- You read a conference talk and got excited
The most expensive mistake is teams adopting microfrontends as an aspiration — “we will grow into needing this” — rather than as a solution to a problem they currently have. The complexity is real and immediate. The benefits are only real if the organizational problems they solve are also real and immediate.
A Practical Implementation Pattern
For teams that have validated the need and are choosing Module Federation, here is a recommended architecture as a starting point.
Directory structure
company-frontend/
├── shell/            # Host application — routing, auth, shell UI
├── mfe-accounts/     # Accounts domain — owned by Team A
├── mfe-tickets/      # Tickets domain — owned by Team B
├── mfe-monitoring/   # Monitoring domain — owned by Team C
├── packages/
│   ├── design-system/      # Shared UI components — owned by Platform team
│   ├── shell-events/       # Shared event bus contract
│   ├── shared-types/       # TypeScript interfaces shared across boundaries
│   └── auth-utils/         # Authentication utilities shared across MFEs
└── tools/
    ├── federation-config/  # Shared Module Federation configuration helpers
    └── mfe-testing/        # Utilities for testing MFEs in isolation
The shell’s responsibilities
// The shell does five things:
// 1. Authentication — everyone uses the same auth session
// 2. Routing — decides which MFE handles which routes
// 3. Global layout — the nav, the header, the footer
// 4. Config distribution — passes user/theme/locale to MFEs
// 5. Error boundaries — catches MFE failures gracefully

@Component({
  selector: 'app-shell',
  template: `
    <app-nav [user]="currentUser$ | async"></app-nav>
    <main>
      <router-outlet></router-outlet>
    </main>
  `,
})
export class ShellComponent {
  private authService = inject(AuthService);
  currentUser$ = this.authService.currentUser$;
}
// Routes in the shell — each loads a remote MFE
const routes: Routes = [
  {
    path: 'accounts',
    loadChildren: () =>
      loadRemoteModule({
        type: 'module',
        remoteEntry: environment.mfeUrls.accounts + '/remoteEntry.js',
        exposedModule: './Module',
      }).then((m) => m.AccountsModule),
  },
  {
    path: 'tickets',
    loadChildren: () =>
      loadRemoteModule({
        type: 'module',
        remoteEntry: environment.mfeUrls.tickets + '/remoteEntry.js',
        exposedModule: './Module',
      }).then((m) => m.TicketsModule),
  },
];
The error boundary that saves you
If a microfrontend fails to load — network error, deployment issue, version conflict — the entire application should not crash.
// React error boundary around each MFE mount point
class MicrofrontendErrorBoundary extends React.Component<
  { name: string; fallback: React.ReactNode; children: React.ReactNode },
  { hasError: boolean; error: Error | null }
> {
  state = { hasError: false, error: null };

  static getDerivedStateFromError(error: Error) {
    return { hasError: true, error };
  }

  componentDidCatch(error: Error, info: React.ErrorInfo) {
    // Log to your monitoring service
    logger.error(`MFE ${this.props.name} failed to render`, {
      error: error.message,
      componentStack: info.componentStack,
    });
  }

  render() {
    if (this.state.hasError) {
      return this.props.fallback;
    }
    return this.props.children;
  }
}

// Usage
<MicrofrontendErrorBoundary
  name="accounts"
  fallback={<ServiceUnavailableMessage domain="Accounts" />}
>
  <Suspense fallback={<LoadingSpinner />}>
    <AccountsApp config={config} />
  </Suspense>
</MicrofrontendErrorBoundary>
The Approach That Is Still Underused: Import Maps
Import maps are a web standard that has shipped in all major browsers, yet almost nobody uses them in their microfrontend architecture. They deserve more attention than they get.
An import map is a JSON document that tells the browser how to resolve bare module specifiers to URLs:
<!-- index.html of the shell -->
<script type="importmap">
  {
    "imports": {
      "react": "https://cdn.company.com/react/18.2.0/react.js",
      "react-dom": "https://cdn.company.com/react/18.2.0/react-dom.js",
      "@company/design-system": "https://cdn.company.com/ds/3.1.0/index.js",
      "accountsMFE/": "https://accounts.company.com/mfe/2.3.0/",
      "ticketsMFE/": "https://tickets.company.com/mfe/1.8.0/"
    }
  }
</script>

// Any script on the page can now do this
// and the browser resolves it using the import map
import AccountsApp from 'accountsMFE/AccountsApp';
import { Button } from '@company/design-system';
// React is shared automatically — the browser only loads it once
Why this matters:
The import map is the single source of truth for every URL in your microfrontend composition. To update AccountsMFE from version 2.3 to version 2.4, you change one line in the import map. No shell rebuild. No Webpack configuration change. You update the map and every user gets the new version.
More importantly: the import map is a deployment artefact that can be generated and deployed independently of every other part of the system. A deployment pipeline that only updates the import map takes seconds rather than minutes, and the change is atomic — every user either gets the old map or the new one, with no partial states.
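A sketch of what that pipeline step can look like: a pure function that produces the next map, which the pipeline then uploads in a single write. The helper name and URL scheme are assumptions for illustration, not an established tool:

```typescript
interface ImportMap {
  imports: Record<string, string>;
}

// Hypothetical deploy-time helper: build the next import map without
// mutating the current one, so the swap can be a single atomic upload.
function withUpdatedMfe(
  map: ImportMap,
  mfeName: string, // e.g. 'accountsMFE'
  baseUrl: string, // e.g. 'https://accounts.company.com/mfe'
  version: string // e.g. '2.4.0'
): ImportMap {
  return {
    imports: {
      ...map.imports,
      // Trailing slash: a prefix entry that covers every module in the MFE
      [`${mfeName}/`]: `${baseUrl}/${version}/`,
    },
  };
}
```

Because the old map is never mutated, rollback is the same operation in reverse: re-upload the previous map, and every subsequent page load resolves to the previous version.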
The combination of Module Federation for development experience and tooling, plus import maps for production deployment control, is the architecture I would build a serious microfrontend system on today.
Current limitation: tooling support for import maps in complex scenarios is still maturing. @angular-architects/native-federation supports them. The broader ecosystem is catching up. This is why it is underused — not because it is technically wrong, but because the tooling was not ready until recently and developers who set up their microfrontend architecture two or three years ago have not revisited the decision.
What the Best Microfrontend Architecture Looks Like
After going through all the options, here is what I would build today for a team that has validated the need:
Shell (host):
- Angular 17+ with esbuild
- Native Federation for MFE loading
- Import map for production version control
- Auth, global layout, routing
MFEs (remotes):
- Each independently deployable to their own CDN path
- Each exposes a single NgModule or React component tree
- Each owns its own data fetching
- Each subscribes to the shared event bus for cross-domain events
- Each has its own Storybook for isolated development
Shared packages (not MFEs):
- Design system — UI components, tokens, typography
- Shell events — event bus contract and types
- Shared types — TypeScript interfaces for cross-boundary contracts
- Auth utils — token reading, permission checking
CI/CD:
- Each MFE has its own pipeline
- Deployment updates the import map atomically
- End-to-end tests run against a composed environment
- Contract tests verify shared interfaces haven't broken
Conclusion
Microfrontends are the right architecture for a specific class of problem — large applications, multiple teams, genuine deployment independence requirements. They are a poor choice for everything else, and the industry’s enthusiasm for the pattern has led to many teams adopting complexity they did not need.
When they are the right choice, the implementation approach matters enormously. The self-executing bundle approach and iframes solve the independence problem while creating new problems with dependency management and user experience. Module Federation solves the dependency problem. Native Federation solves the build tool coupling problem. Import maps solve the deployment control problem. The combination of the last three is the architecture that gives you the actual benefits of microfrontends — independent deployment, shared dependencies, clean team boundaries — without the costs that earlier approaches imposed.
The real work in a microfrontend architecture is not the technical implementation. The technical implementation, with modern tooling, is manageable. The real work is defining the team boundaries cleanly enough that the microfrontends are genuinely independent, establishing the shared contracts carefully enough that integration is reliable, and resisting the temptation to couple things together at runtime that were supposed to be independent.
Get the team structure right first. The architecture will follow.