Strangler Fig Pattern Explained: The Safer Path from Monolith to Microservices

It’s Tuesday stand-up. Someone asks about a small UI fix. A senior engineer sighs: “We can’t. The payment team has a release Thursday, and we share the same pipeline.”
Everyone nods. Nobody is surprised.
That’s life inside a monolith that’s outgrown itself. Not big dramatic failures, just constant friction. A tiny change breaks something unrelated. A migration locks the whole system. A one-line hotfix needs three teams and a 2 AM deployment window.
The system works. It got you here. But now it’s the thing slowing everyone down.
So the idea comes up: let’s just rewrite the whole thing. And that’s where it gets dangerous. Full rewrites almost always take 2–3x longer than planned. By the time you’re done, half the edge cases and business rules from the old system are missing — because nobody documented them. They were just quietly baked into the code over years.
You don’t need to blow it all up. You just need a smarter way to replace it, piece by piece, while it’s still running.
That’s exactly what the Strangler Fig Pattern is for.
Full rewrites almost always fail. The Strangler Fig Pattern is the safer alternative.
You incrementally extract services from the monolith, redirect traffic via an API gateway, and let the monolith shrink over time.
Decomposing application logic is the easy part; the database is where migrations stall.
Start with low-risk, low-coupling seams. Leave the core transactional flows for later.
Operational readiness (observability, CI/CD, routing) must come before the first extraction.
The pattern only works if you actively decommission monolith code after each extraction.
1. The Problem with Big Bang Rewrites
Let’s be direct: full system rewrites almost always fail. Not because the engineers aren’t talented or the architecture isn’t sound — but because you’re trying to recreate years of accumulated business logic, edge case handling, and institutional knowledge while simultaneously keeping the lights on.
Joel Spolsky’s warning still holds: the full rewrite is the single worst strategic mistake a software organization can make. The old system, messy as it is, contains years of bug fixes encoding business rules nobody documented. The new system doesn’t. And by the time the rewrite is done — if it ever finishes — the business has moved on, requirements have shifted, and you’ve spent 18 months shipping nothing.
The Strangler Fig Pattern is the architectural answer to this problem. It offers a migration path that is incremental, reversible, and business-friendly.
2. What Is the Strangler Fig Pattern?
The pattern is named after the strangler fig tree found in rainforests. The fig starts as a seed deposited on the branch of a host tree. It grows downward, wrapping roots around the host trunk, gradually encasing it. Eventually, the host tree dies and decomposes, and the fig stands on its own — having used the original tree as its scaffold throughout.
Martin Fowler formalized this analogy for software migration. The idea is straightforward: rather than replacing the monolith in one shot, you incrementally build new services alongside it, redirect traffic one domain slice at a time, and let the monolith wither until it can be decommissioned entirely.
The Strangler Fig is not a microservices pattern. It is a migration strategy.
The destination can be microservices, a modular monolith, or any target architecture. The pattern is about the journey, not the destination.
To make this concrete, imagine an e-commerce platform — let’s call it ShopCore — a monolith handling everything from product catalog to order management to payments. We’ll use ShopCore as a running example throughout.
3. Migration Strategy and Phasing
Successful strangler migrations follow disciplined phasing. The instinct to extract the “most important” or “most painful” service first is usually wrong. Start with services that have clear boundaries, low database coupling, and tolerable business risk.
Phase 0: Prepare the Soil
Before extracting a single service, invest in operational readiness. You need:
Centralized logging and observability
Distributed tracing (even basic correlation IDs go a long way early; see the sketch at the end of this phase)
A CI/CD pipeline capable of deploying services independently
An API gateway or reverse proxy you can configure to route traffic
Without these foundations, you will be debugging distributed systems in the dark. Teams that skip Phase 0 often end up with more operational complexity than the monolith ever had.
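To make the correlation-ID point concrete, here is a minimal sketch in Python: a framework-agnostic WSGI middleware that tags every request with an ID it either inherits from the gateway or mints itself. The header name and environ key are assumptions, not a prescribed convention.

```python
import uuid

class CorrelationIdMiddleware:
    """WSGI middleware: attach a correlation ID to every request so log
    lines from the monolith and the new services can be stitched together."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Reuse the ID set upstream (e.g., by the gateway) or mint a new one.
        cid = environ.get("HTTP_X_CORRELATION_ID") or uuid.uuid4().hex
        environ["correlation.id"] = cid  # handlers can log this value

        def start_response_with_id(status, headers, exc_info=None):
            # Echo the ID back so clients and the gateway can log it too.
            headers = list(headers) + [("X-Correlation-ID", cid)]
            return start_response(status, headers, exc_info)

        return self.app(environ, start_response_with_id)
```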
Phase 1: Introduce the Facade
Insert an API gateway (Kong, AWS API Gateway, NGINX, Envoy) in front of the monolith. Initially, 100% of traffic passes through to the monolith unchanged. This is a zero-risk step that proves the routing layer works and builds team confidence in the infrastructure.
For ShopCore, this means all requests — /products, /orders, /checkout — continue to hit the monolith, just via the new gateway. No user-facing change whatsoever.
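To show what pass-through means in practice, here is the gateway’s routing table in miniature, sketched in Python rather than any particular gateway’s configuration language. The hostnames are invented for the ShopCore example; the point is that every prefix resolves to the monolith, and later phases only change the right-hand side.

```python
# Phase 1 route table: every path still resolves to the monolith.
MONOLITH = "http://shopcore-monolith:8080"

ROUTES = {
    "/products": MONOLITH,
    "/orders": MONOLITH,
    "/checkout": MONOLITH,
}

def resolve(path: str) -> str:
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return upstream
    return MONOLITH  # anything unmapped stays on the monolith
```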
Phase 2: Identify and Prioritize Seams
This is where architectural judgment matters most. You’re looking for functional seams — areas where you can draw a domain boundary and say “this is distinct.” Domain-Driven Design concepts like bounded contexts and aggregates are useful tools here.
Good first candidates share these characteristics:
Clearly defined inputs and outputs with the rest of the system
Communication via well-understood interfaces (not JOINs across domain tables)
Read-heavy traffic, which simplifies data consistency concerns
Graceful degradation if the service fails — not a catastrophic dependency
For ShopCore, the Product Catalog is an ideal first extraction. It’s read-heavy, relatively self-contained, and if it has an outage, users can’t browse — but they can still complete orders already in their basket. Low blast radius.
Authentication, notification dispatch, and reporting/analytics are other common early wins. Core transactional flows like order processing or payment handling should come later, after the team has built confidence.
Phase 3: Extract, Redirect, Verify, Decommission
For each selected seam, the life cycle is:
Build the new service with its own runtime, deployment pipeline, and (eventually) its own data store.
Canary — route a small percentage of traffic to the new service via the gateway (1% → 10% → 50% → 100%). Monitor closely; a weighted-routing sketch follows this list.
Cut over — once confident, shift 100% of traffic to the new service.
Decommission — remove the corresponding code from the monolith. Do not leave dead code in place. It creates confusion and the temptation to “just fix it in the monolith” when something goes wrong.
Celebrate. The monolith is smaller.
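Continuing the Phase 1 route-table sketch, the canary step turns a single upstream into a weighted list. Weights, hostnames, and the hashing choice are assumptions for illustration:

```python
import hashlib

# The 10/90 split below corresponds to the 10% canary step for /products.
WEIGHTED_ROUTES = {
    "/products": [("http://catalog-service:8080", 10),
                  ("http://shopcore-monolith:8080", 90)],
}
DEFAULT_UPSTREAM = "http://shopcore-monolith:8080"

def resolve(path: str, user_id: str) -> str:
    for prefix, upstreams in WEIGHTED_ROUTES.items():
        if path.startswith(prefix):
            # Hash the user ID so each user consistently sees one
            # implementation instead of flip-flopping per request.
            bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
            threshold = 0
            for upstream, weight in upstreams:
                threshold += weight
                if bucket < threshold:
                    return upstream
    return DEFAULT_UPSTREAM
```

Hash-based stickiness is a deliberate choice here: pure per-request random splitting is simpler, but a user bouncing between implementations mid-session makes bug reports nearly impossible to reproduce.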
Phase 4: Repeat Until Done
Each extraction makes the next one easier — but the remaining code is often more entangled (you extracted the easy parts first). Budget accordingly. Late-stage extractions take longer and require more care.
4. Data Management: The Hardest Part
Experienced architects will tell you that decomposing application logic is the easy part. The database is where migrations stall.
The Shared Database Problem
Most monoliths are built around a single relational database that multiple modules access directly — often with complex JOINs across domain boundaries. Microservices doctrine says each service should own its data, but you cannot cut over to that model on day one.
The pragmatic intermediate state is shared database, separate schemas: the new service reads and writes to the same physical database as the monolith, but in a dedicated schema that only the new service touches. This avoids distributed transactions while still giving you a deployment boundary.
For ShopCore’s Product Catalog service, this means creating a catalog schema in the existing database, migrating the relevant tables there, and ensuring the monolith accesses catalog data via the new service’s API rather than directly via SQL.
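In code, that contract change looks roughly like this; the endpoint, client library, and URL are assumptions for the ShopCore example:

```python
import requests  # any HTTP client works; requests is used for brevity

CATALOG_API = "http://catalog-service:8080"

# Before: the monolith queried catalog tables directly, e.g.
#   SELECT id, name, price_cents FROM products WHERE id = %s
# After: catalog data is owned by the new service and fetched via its API.
def get_product(product_id: str) -> dict:
    resp = requests.get(f"{CATALOG_API}/products/{product_id}", timeout=2)
    resp.raise_for_status()
    return resp.json()
```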
Over time, you migrate to a fully separate database. That transition is where the real work is.
The Synchronization Challenge
During transition, both the monolith and the new service may need to access the same data. There are several approaches:
Dual Write — The monolith (or an intermediary layer) writes to both the old store and the new service’s store. You verify consistency before cutting over reads. Risk: dual writes are complex to implement correctly and can create partial failure states.
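A minimal sketch of the dual-write shape, assuming duck-typed store objects and a retry queue that a background job drains; all names here are hypothetical:

```python
import logging

log = logging.getLogger("dual_write")

class DualWriteProductRepository:
    """Write to the legacy store first (still the source of truth), then
    mirror to the new service's store. A failed mirror write is queued
    for reconciliation rather than failing the user's request."""

    def __init__(self, legacy_store, new_store, retry_queue):
        self.legacy_store = legacy_store
        self.new_store = new_store
        self.retry_queue = retry_queue

    def save(self, record: dict) -> None:
        self.legacy_store.save(record)       # must succeed: authoritative write
        try:
            self.new_store.save(record)      # best-effort mirror
        except Exception:
            log.exception("mirror write failed; queued for reconciliation")
            self.retry_queue.append(record)  # a background job replays these
```

Even this simple version shows where the danger lives: the two stores can diverge whenever the mirror path fails, so the reconciliation job and consistency checks are mandatory, not optional.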
Change Data Capture (CDC) — You capture changes from the monolith’s database as a stream of events (via tools like Debezium on PostgreSQL or MySQL) and replay them into the new service’s store. The new service stays current without the monolith knowing it exists. This is the more robust pattern for high-traffic systems, though it adds infrastructure complexity.
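On the consuming side, a CDC pipeline might look like the following sketch, assuming Debezium is already streaming the monolith’s products table into Kafka. The topic name follows Debezium’s default <server>.<schema>.<table> convention; the two writer functions are stand-ins for real persistence code:

```python
import json
from kafka import KafkaConsumer  # kafka-python; any Kafka client works

def upsert_catalog_row(row: dict) -> None:
    print("upsert", row)        # stand-in: write to the catalog store

def delete_catalog_row(row: dict) -> None:
    print("delete", row["id"])  # stand-in: delete from the catalog store

consumer = KafkaConsumer(
    "shopcore.public.products",  # Debezium default: <server>.<schema>.<table>
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b) if b else None,
)

for message in consumer:
    if message.value is None:    # Kafka tombstone after a delete; skip
        continue
    payload = message.value.get("payload", message.value)
    if payload["op"] in ("c", "u", "r"):  # create, update, snapshot read
        upsert_catalog_row(payload["after"])
    elif payload["op"] == "d":
        delete_catalog_row(payload["before"])
```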
Synchronous API Calls — The new service calls back to the monolith for data it doesn’t yet own. Simplest to implement, but creates temporal coupling: the new service can’t function if the monolith is unavailable. Use only as a short-term bridge.
Eventual Consistency
Moving from a relational monolith to distributed services means accepting eventual consistency in some workflows. Make these decisions explicitly: which operations require strong consistency (keep those co-located), and which can tolerate eventual consistency (let those cross service boundaries).
Attempting to enforce distributed transactions without deep experience in sagas or two-phase commit adds significant complexity. If you find yourself needing this, question whether the domain boundary you’ve drawn is actually correct.
Rule: A table should be written to by exactly one service. If the monolith and a new service are both writing to the same table, you have not created a real boundary — you’ve created a distributed monolith with the downsides of both architectures.
5. Real-World Pitfalls and Lessons Learned
Most strangler migrations don’t fail because of bad architecture. They fail because of the same mistakes, made over and over. Here’s what to watch out for.
Extracting too early. Teams rush to “show progress” before observability, CI/CD, and routing are solid. A poorly operated microservice is worse than a monolith. Phase 0 is not optional.
Cutting along technical lines. Pulling out “all image processing logic” sounds clean but creates highly coupled services. Cut along domain boundaries, not technical ones — even if it feels messier.
Not deleting monolith code. Extracting a service but leaving the old code “just in case” gives you two sources of truth. Decommissioning is the whole point — don’t skip it.
Ignoring cross-cutting concerns. Auth, rate limiting, tracing — these need to work consistently across both systems from day one, not bolted on later.
Team and service boundaries don’t match. If one team owns the monolith and new services simultaneously, it’s chaos. Let Conway’s Law work for you, not against you.
Underestimating dark matter. Every monolith has code nobody fully understands. Write a characterization test suite before you start extracting; you’ll thank yourself later (see the sketch after this list).
No rollback plan. Reversibility is the pattern’s biggest strength. Keep the monolith code alive during transition and test your rollback path before every extraction. “We can roll back in 10 minutes” is what keeps leadership on your side.
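On the dark-matter point, a characterization test pins down what the monolith actually does today so the extracted service can be held to the same observed behavior. Here is a minimal sketch against ShopCore’s HTTP surface; the endpoints, fields, and status codes are invented, and in practice should come from recorded production traffic:

```python
import unittest

import requests  # tests run against a live monolith instance

MONOLITH = "http://localhost:8000"  # assumed local ShopCore deployment

class CatalogCharacterization(unittest.TestCase):
    """Assert what the system does today, not what a spec says it should do."""

    def test_product_response_shape(self):
        resp = requests.get(f"{MONOLITH}/products/sku-123")
        self.assertEqual(resp.status_code, 200)
        # Pin the observed contract; the extracted service must reproduce it.
        self.assertEqual(sorted(resp.json()), ["id", "name", "price_cents", "stock"])

    def test_unknown_product_returns_404(self):
        resp = requests.get(f"{MONOLITH}/products/nope")
        self.assertEqual(resp.status_code, 404)

if __name__ == "__main__":
    unittest.main()
```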
6. Final Thoughts
The Strangler Fig Pattern is not glamorous. It is slow, disciplined work, the kind that separates engineers who can operate at scale from those who can only build from scratch.
The monolith you’re starting with was not built in a day. It will not be replaced in one either. Respect the system that got you here, extract carefully, and let it wither gracefully.