How to apply the strangler fig pattern to incrementally migrate a Java monolith — routing traffic, extracting bounded contexts, and keeping the system running throughout the migration.
The strangler fig pattern — named after a vine that grows around a tree until the tree is entirely replaced — is the safest approach to migrating a monolith to services. You extract functionality incrementally, routing some traffic to new services while the monolith handles the rest, until nothing remains in the monolith. At no point do you stop the world for a big-bang rewrite.
Rather than replacing the monolith all at once, build a facade that sits in front of it. New service implementations are added behind the facade. Traffic is routed to new services as they become ready, and to the monolith for everything else. When all routes point to new services, the monolith is strangled — it no longer receives production traffic and can be decommissioned.
Client → Facade → [new OrderService]   (extracted)
                → [new MarketService]  (extracted)
                → [Monolith]           (remaining)
Spring Cloud Gateway is a natural fit for the facade — it routes HTTP traffic and supports per-route predicates:
spring:
  cloud:
    gateway:
      routes:
        - id: orders-service
          uri: http://order-service:8080
          predicates:
            - Path=/api/orders/**
          filters:
            - StripPrefix=1
        - id: markets-service
          uri: http://market-service:8080
          predicates:
            - Path=/api/markets/**
        - id: monolith-fallback
          uri: http://monolith:8080
          predicates:
            - Path=/api/**
          order: 999  # lowest priority, catch-all
The monolith receives everything not yet extracted. When a new service is ready, add a higher-priority route. The monolith’s route remains as the fallback.
Extract bounded contexts with clear boundaries and high change frequency first:
// Good first candidate: self-contained, high churn, clear API
public class OrderService {
    public Order placeOrder(PlaceOrderRequest request) { ... }
    public Order cancelOrder(String orderId) { ... }
    public List<Order> getOrdersByMarket(String marketId) { ... }
}

// Poor first candidate: deeply entangled, shared data, unclear boundary
public class ReportService {
    // reads from 12 tables, shared with 5 other services, no clear owner
}
Use coupling analysis (dependency graphs, database join frequency) to identify natural seams.
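A first pass at that analysis can be as simple as scoring each candidate. Here is a minimal sketch under stated assumptions: inbound dependency counts come from a dependency graph, recent commit counts stand in for churn, and the scoring formula is illustrative, not a standard metric. The names (SeamScorer, Candidate) are hypothetical.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative heuristic: rank extraction candidates by favouring high churn
// (frequent change) and penalising inbound coupling (many dependants).
public class SeamScorer {

    public record Candidate(String name, int inboundDeps, int commitsLast90Days) {}

    // Higher score = better first extraction candidate.
    public static double score(Candidate c) {
        return c.commitsLast90Days() / (1.0 + c.inboundDeps());
    }

    public static List<Candidate> rank(List<Candidate> candidates) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble(SeamScorer::score).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Candidate> ranked = rank(List.of(
                new Candidate("ReportService", 12, 4),    // entangled, low churn
                new Candidate("OrderService", 2, 60)));   // self-contained, high churn
        System.out.println(ranked.get(0).name()); // OrderService ranks first
    }
}
```

A rough score like this is only a starting point for discussion; the ranking should be sanity-checked against domain knowledge before committing to an extraction order.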
The monolith has a single database. Extracted services need their own data stores to achieve true independence. The path:
Phase 1 — shared database: The new service reads and writes the same tables as the monolith. No data migration yet, but the service is deployed independently.
// New service uses the same datasource initially
@Bean
@ConfigurationProperties("spring.datasource.shared")
public DataSource sharedDataSource() {
    return DataSourceBuilder.create().build();
}
Phase 2 — strangler with sync: The new service has its own database, but a synchronisation mechanism keeps it in sync with the monolith database. Use the outbox pattern or CDC (Change Data Capture with Debezium).
Phase 3 — ownership transfer: Traffic routes exclusively to the new service. The monolith no longer writes to those tables. The sync can be removed.
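The core invariant of the Phase 2 outbox approach is worth spelling out: the business write and the event write share one transaction, so the relay (such as Debezium tailing the outbox table) never sees a half-completed change. A minimal in-memory sketch of that invariant, with hypothetical names (OutboxSketch, TransactionRunner standing in for something like Spring's @Transactional); in production both writes would be SQL inserts into the business table and an outbox table:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the outbox pattern's shape: one transactional
// boundary covers both the business state change and the event that
// describes it, avoiding the dual-write problem.
public class OutboxSketch {

    public record OutboxEvent(String aggregateId, String eventType, String payload) {}

    // Stand-in for a transactional boundary (e.g. Spring's @Transactional).
    public interface TransactionRunner {
        void inTransaction(Runnable work);
    }

    private final TransactionRunner tx;
    private final List<String> ordersTable = new ArrayList<>();      // business table
    private final List<OutboxEvent> outboxTable = new ArrayList<>(); // outbox table

    public OutboxSketch(TransactionRunner tx) {
        this.tx = tx;
    }

    public void placeOrder(String orderId, String payload) {
        tx.inTransaction(() -> {
            ordersTable.add(orderId);                                   // business write
            outboxTable.add(new OutboxEvent(orderId, "OrderPlaced",     // event write,
                    payload));                                          // same transaction
        });
    }

    // A relay (e.g. Debezium) would read these and apply them to the
    // new service's database.
    public List<OutboxEvent> pendingEvents() {
        return List.copyOf(outboxTable);
    }
}
```

The key design point is that the new service's database is updated only via the relayed events, never by a second direct write from the request path.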
Rather than a binary “old” vs “new” route, use feature flags for canary traffic shifting:
@Component
public class OrderRoutingFilter implements GatewayFilter {

    @Autowired
    private LaunchDarklyClient featureFlags;

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String userId = extractUserId(exchange.getRequest());
        if (featureFlags.boolVariation("use-new-order-service", userId, false)) {
            // Forward to the new service. GATEWAY_REQUEST_URL_ATTR is a static
            // import from org.springframework.cloud.gateway.support.ServerWebExchangeUtils
            exchange.getAttributes().put(GATEWAY_REQUEST_URL_ATTR,
                    URI.create("http://order-service:8080" + exchange.getRequest().getPath()));
        }
        // else fall through to the monolith route
        return chain.filter(exchange);
    }
}
Start with 1% of users on the new service, monitor error rates and latency, then gradually increase to 100%.
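If a flag service is not available, the same gradual shift can be sketched with stable hashing: map each user id to a fixed bucket in [0, 100) and route users below the rollout percentage to the new service. The class name and the CRC32 choice are illustrative assumptions; any stable hash works.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Illustrative percentage-based canary routing: the same user always lands
// in the same bucket, so their experience doesn't flap between backends as
// the rollout percentage increases.
public class CanaryBucketer {

    // Stable bucket in [0, 100) derived from the user id.
    public static int bucket(String userId) {
        CRC32 crc = new CRC32();
        crc.update(userId.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % 100);
    }

    public static boolean useNewService(String userId, int rolloutPercent) {
        return bucket(userId) < rolloutPercent;
    }
}
```

Raising rolloutPercent from 1 to 100 only ever moves users from the monolith to the new service, never back and forth, which keeps the canary data clean.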
The monolith’s domain model is often entangled and inconsistent. When the new service reads data from the monolith (via API or database), translate it through an anti-corruption layer to avoid carrying monolith design decisions into the new service:
@Component
public class MonolithOrderAdapter {

    public Order fromMonolithResponse(MonolithOrderDto dto) {
        return Order.builder()
                .id(new OrderId(dto.getOrderReference()))            // rename + type wrapping
                .marketId(new MarketId(dto.getBfMarketId()))
                .price(Price.of(dto.getRequestedOdds()))             // semantic naming
                .status(OrderStatus.fromLegacyCode(dto.getStatus()))
                .build();
    }
}
This keeps the new service’s domain model clean, even if the monolith’s data is messy.
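The OrderStatus.fromLegacyCode translation in the adapter can be sketched as an enum that maps the monolith's opaque codes onto self-describing values. The specific legacy codes here are illustrative assumptions, not the monolith's actual values:

```java
// Illustrative anti-corruption mapping: legacy status codes stay at the
// boundary; the new service's domain only ever sees the enum.
public enum OrderStatus {
    EXECUTABLE, EXECUTION_COMPLETE, EXPIRED, UNKNOWN;

    public static OrderStatus fromLegacyCode(String code) {
        return switch (code == null ? "" : code) {
            case "E"  -> EXECUTABLE;
            case "EC" -> EXECUTION_COMPLETE;
            case "EX" -> EXPIRED;
            default   -> UNKNOWN; // never let an unmapped legacy code crash the new service
        };
    }
}
```

Mapping unrecognised codes to UNKNOWN (and alerting on them) is usually safer than throwing, since legacy systems tend to contain status values nobody remembers adding.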
Before decommissioning the monolith’s handling of a route, run shadow mode: send traffic to both the monolith and the new service, log both responses, but return only the monolith’s response to the client. Compare the responses asynchronously to find discrepancies before the cutover.
@Component
public class ShadowModeFilter implements GatewayFilter {

    // Async HTTP client pointed at the new service (e.g. a WebClient wrapper)
    private final ShadowClient shadowClient;

    public ShadowModeFilter(ShadowClient shadowClient) {
        this.shadowClient = shadowClient;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Mirror the request to the new service asynchronously
        shadowClient.sendAsync(exchange.getRequest())
                .thenAccept(newResponse -> compareResponses(exchange, newResponse));
        // Return the monolith's response to the client as normal
        return chain.filter(exchange);
    }
}
Shadow mode gives you confidence before each traffic shift.
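The compareResponses step can be sketched as a field-by-field diff that ignores fields expected to differ legitimately (generated ids, timestamps). The class name, the ignore list, and the flat map-of-strings response shape are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.TreeSet;

// Illustrative shadow-mode comparator: report which fields disagree between
// the monolith's response and the new service's response.
public class ResponseComparator {

    // Fields that legitimately differ between the two backends.
    private static final Set<String> IGNORED_FIELDS = Set.of("timestamp", "traceId");

    // Returns the names of fields whose values disagree, in sorted order.
    public static List<String> discrepancies(Map<String, String> monolith,
                                             Map<String, String> candidate) {
        List<String> diffs = new ArrayList<>();
        Set<String> allKeys = new TreeSet<>(monolith.keySet());
        allKeys.addAll(candidate.keySet());
        for (String key : allKeys) {
            if (IGNORED_FIELDS.contains(key)) {
                continue;
            }
            if (!Objects.equals(monolith.get(key), candidate.get(key))) {
                diffs.add(key);
            }
        }
        return diffs;
    }
}
```

Logging the discrepancy field names (rather than full payloads) also avoids leaking sensitive response data into the comparison logs.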
The strangler fig is not glamorous. It is slow, incremental, and requires discipline to resist the temptation of the big-bang rewrite. It is also the migration strategy with the most consistent track record of success.
If you’re planning a monolith migration in Java and want help with the architectural approach, get in touch.