
The Legacy System Modernization Playbook: Strangler Fig, Not Big Bang

Oleksandr Melnychenko · February 17, 2026
Tags: Legacy Systems, Modernization, Architecture, Migration, Strangler Fig

The Rewrite Trap

Every few years, an engineering team looks at a legacy codebase and says: "We should rewrite this from scratch." Six months later, the rewrite is behind schedule, missing features the old system had, and the business is running two systems in parallel — both poorly.

The rewrite-from-scratch approach fails for predictable reasons:

  • The old system encodes years of business rules that nobody fully documented. Some are in code. Some are in config files. Some exist only in the heads of people who left the company.
  • The business doesn't stop while you rewrite. Every new feature must be built in both systems during the transition.
  • Rewrites routinely take 2–3x longer than estimated, because you keep discovering edge cases the old system handles that nobody knew about.

We've been called in to rescue three "complete rewrite" projects that stalled after 12+ months. In each case, switching to an incremental migration approach delivered results in weeks that the rewrite hadn't achieved in months.

The Strangler Fig Pattern

Named after the strangler fig, a plant that grows around a host tree and gradually replaces it, this pattern lets you modernize a system incrementally:

  1. Identify a seam — a boundary in the old system where you can intercept requests
  2. Build the new implementation for a specific capability
  3. Route traffic to the new implementation while keeping the old one available
  4. Verify the new implementation produces correct results
  5. Retire the old implementation for that capability
  6. Repeat for the next capability
                    ┌──────────────┐
                    │   API Layer  │
                    │   (Router)   │
                    └──────┬───────┘
                           │
              ┌────────────┼────────────┐
              │            │            │
        ┌─────▼──────┐ ┌───▼────┐ ┌─────▼────┐
        │ New Auth   │ │ Legacy │ │ New      │
        │ Service    │ │ Monol. │ │ Billing  │
        │ (migrated) │ │        │ │(migrated)│
        └────────────┘ └────────┘ └──────────┘

The key insight: at any point during the migration, the system works. There's no "big bang" cutover moment where everything could fail.

Step 1: Assess Before You Touch Anything

Before writing a single line of new code, you need a clear picture of what you're working with.

Dependency Mapping

Map every external integration, database dependency, and inter-service communication:

// What you're looking for:
const systemMap = {
  externalAPIs: ["payment-gateway", "email-provider", "CRM-webhook"],
  databases: ["postgres-main", "redis-cache", "legacy-oracle"],
  internalServices: ["auth", "notifications", "reporting"],
  sharedState: ["session-store", "feature-flags", "config-db"],
  scheduledJobs: ["nightly-reconciliation", "weekly-reports", "hourly-sync"],
};
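Once mapped, the inventory is most useful as a flat checklist you can work through during the assessment. A minimal sketch of that step, assuming a hypothetical `toChecklist` helper (category and item names are illustrative):

```typescript
// Hypothetical helper: flatten the system map into a reviewable
// checklist so no dependency is missed during assessment.
type SystemMap = Record<string, string[]>;

function toChecklist(map: SystemMap): string[] {
  return Object.entries(map).flatMap(([category, items]) =>
    items.map((item) => `[ ] ${category}: ${item}`)
  );
}

const exampleMap: SystemMap = {
  externalAPIs: ["payment-gateway", "email-provider"],
  databases: ["postgres-main", "legacy-oracle"],
};

const checklist = toChecklist(exampleMap);
// First entry: "[ ] externalAPIs: payment-gateway"
```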

Risk Classification

Not every part of the system has equal risk. Classify components:

  • Low risk — internal tools, admin dashboards, reporting. If it breaks, it's inconvenient but not harmful.
  • Medium risk — customer-facing features that have workarounds. If the new search is down, users can still browse by category.
  • High risk — payment processing, authentication, core business logic. Errors here mean lost revenue or lost trust.

Always start with low-risk components. They give your team practice with the migration pattern before tackling the critical paths.
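Encoding the tiers in code makes the migration order a computed artifact rather than a debate. A sketch, with hypothetical component names:

```typescript
// Risk tiers from the classification above, encoded so migration
// order can be derived: low-risk components migrate first.
type Risk = "low" | "medium" | "high";

interface Component {
  name: string;
  risk: Risk;
}

const riskRank: Record<Risk, number> = { low: 0, medium: 1, high: 2 };

function migrationOrder(components: Component[]): string[] {
  return [...components]
    .sort((a, b) => riskRank[a.risk] - riskRank[b.risk])
    .map((c) => c.name);
}

const order = migrationOrder([
  { name: "payments", risk: "high" },
  { name: "admin-dashboard", risk: "low" },
  { name: "search", risk: "medium" },
]);
// → ["admin-dashboard", "search", "payments"]
```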

Data Inventory

The hardest part of any migration is the data. Document:

  • Schema differences between old and new systems
  • Data that needs transformation during migration
  • Data that's referenced by external systems (and can't change format)
  • Historical data requirements (do you need 7 years of transactions, or can you archive?)

Step 2: Build the Routing Layer

The routing layer is the foundation of the strangler fig. It sits in front of the legacy system and decides which requests go to the old system and which go to the new one.

Feature Flags for Traffic Routing

// Route based on feature flags
async function handleRequest(req: Request) {
  const route = resolveRoute(req);

  if (await featureFlag.isEnabled(`migration.${route.service}`, req.user)) {
    return proxyToNewService(route, req);
  }

  return proxyToLegacySystem(route, req);
}

This lets you:

  • Canary deploy — route 1% of traffic to the new service first
  • User-based rollout — migrate internal users first, then beta customers, then everyone
  • Instant rollback — flip the flag and all traffic goes back to the legacy system
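One way to implement the canary behavior above — the function name and bucketing scheme here are illustrative, not a specific flag library's API — is to hash each user into a stable bucket and compare against a rollout percentage. Hashing makes the assignment deterministic, so a given user always lands on the same system and never flip-flops mid-session:

```typescript
import { createHash } from "node:crypto";

// Deterministic percentage rollout: hash (flag, user) into a bucket
// 0–99 and route to the new system when the bucket falls under the
// configured percentage.
function inRollout(userId: string, flagName: string, percent: number): boolean {
  const hash = createHash("sha256").update(`${flagName}:${userId}`).digest();
  const bucket = hash.readUInt32BE(0) % 100;
  return bucket < percent;
}
```

Raising `percent` from 1 to 100 gradually shifts traffic; setting it to 0 is the instant rollback.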

Dual-Write During Transition

For data mutations, you may need to write to both systems during the transition period:

async function createOrder(orderData: OrderInput) {
  // Write to new system (source of truth)
  const order = await newOrderService.create(orderData);

  // Also write to legacy system (for features not yet migrated)
  try {
    await legacySystem.createOrder(transformToLegacyFormat(order));
  } catch (err) {
    logger.warn("Legacy dual-write failed", { orderId: order.id, error: err });
    // Don't fail the request — legacy is secondary
  }

  return order;
}

Dual-write is inherently dangerous — you can end up with inconsistent data between systems. Use it only during the transition period, keep it as short as possible, and always designate one system as the source of truth.
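Because dual-write drifts, pair it with a reconciliation job that periodically compares records between the two systems. A minimal sketch — the record shape and fetchers are stand-ins for real data access:

```typescript
// Compare the same records in both systems and report IDs that drifted.
interface OrderRecord {
  id: string;
  total: number;
  status: string;
}

async function reconcile(
  ids: string[],
  fetchNew: (id: string) => Promise<OrderRecord | null>,
  fetchLegacy: (id: string) => Promise<OrderRecord | null>,
): Promise<string[]> {
  const drifted: string[] = [];
  for (const id of ids) {
    const [a, b] = await Promise.all([fetchNew(id), fetchLegacy(id)]);
    // Missing on either side, or any differing field, counts as drift.
    if (!a || !b || a.total !== b.total || a.status !== b.status) {
      drifted.push(id);
    }
  }
  return drifted;
}
```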

Step 3: Migrate One Service at a Time

Choosing What to Migrate First

Pick your first migration target based on:

  1. Low coupling — minimal dependencies on other legacy components
  2. Clear boundaries — well-defined inputs and outputs
  3. High value — the team learns the most, or the business benefit is clearest
  4. Low risk — if something goes wrong, impact is contained

Authentication is often a good first target: it has a clear interface (credentials in, token out), it's relatively self-contained, and modernizing it immediately improves security.

The Migration Checklist

For each component you migrate:

  • [ ] New service handles all functionality of the legacy component
  • [ ] API contract matches (or adapters translate between old and new formats)
  • [ ] Data is migrated and verified
  • [ ] Monitoring and alerting are configured
  • [ ] Rollback plan is documented and tested
  • [ ] Performance benchmarks meet or exceed the legacy system
  • [ ] Integration tests cover all known edge cases
  • [ ] Feature flag allows gradual traffic shift
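The "adapters translate between old and new formats" item from the checklist, in practice: wrap the new service so legacy callers keep receiving the response shape they expect. Field names here are hypothetical:

```typescript
// New auth service response shape (illustrative).
interface NewAuthResult {
  accessToken: string;
  expiresAt: number; // epoch seconds
}

// Shape legacy callers still expect (illustrative).
interface LegacyAuthResult {
  token: string;
  ttl_seconds: number;
}

// Adapter: translate the new contract into the legacy one so
// unmigrated consumers don't need to change.
function toLegacyShape(result: NewAuthResult, nowEpoch: number): LegacyAuthResult {
  return {
    token: result.accessToken,
    ttl_seconds: Math.max(0, result.expiresAt - nowEpoch),
  };
}
```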

Step 4: Data Migration Strategies

Online Migration (Zero Downtime)

For systems that can't afford downtime:

  1. Set up Change Data Capture (CDC) on the legacy database
  2. Perform initial bulk migration of historical data
  3. Stream ongoing changes from CDC to the new database
  4. Verify data consistency by comparing both databases
  5. Switch reads to the new database
  6. Switch writes to the new database
  7. Decommission the CDC pipeline
Legacy DB ──── CDC Stream ────► New DB
    │                              │
    └──── Reads (old) ────┐        │
                          ▼        ▼
                       Comparison Job
                    (verify consistency)
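The apply side of step 3 is worth sketching, because idempotency is what makes CDC replays safe. Assuming each change event carries a monotonically increasing log sequence number (the event shape here is illustrative, not a specific CDC tool's format):

```typescript
// A change event as emitted by the CDC stream (illustrative shape).
interface ChangeEvent {
  lsn: number; // log sequence number from the legacy database's log
  table: string;
  op: "insert" | "update" | "delete";
  row: { id: string; [key: string]: unknown };
}

// Apply a change to the new store idempotently: events at or below the
// last applied LSN are replays and are skipped.
function applyChange(
  store: Map<string, object>,
  applied: { lastLsn: number },
  ev: ChangeEvent,
): void {
  if (ev.lsn <= applied.lastLsn) return; // already applied — skip replay
  const key = `${ev.table}:${ev.row.id}`;
  if (ev.op === "delete") store.delete(key);
  else store.set(key, ev.row);
  applied.lastLsn = ev.lsn;
}
```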

Offline Migration (Maintenance Window)

For systems that can tolerate scheduled downtime:

  1. Announce a maintenance window
  2. Stop writes to the legacy system
  3. Run migration scripts
  4. Verify data integrity
  5. Switch traffic to the new system
  6. Monitor closely for 24–48 hours

Offline migration is simpler but requires business buy-in for the downtime.

Common Modernization Patterns

Pattern 1: Monolith to Services

Don't extract all microservices at once. Start with the bounded contexts that change most frequently or have the most scaling needs:

Before:  [Monolith: Auth + Users + Orders + Reports + Billing]

Phase 1: [Monolith: Users + Orders + Reports + Billing] + [Auth Service]
Phase 2: [Monolith: Users + Orders + Reports] + [Auth] + [Billing Service]
Phase 3: [Monolith: Users + Reports] + [Auth] + [Billing] + [Orders Service]
...

Pattern 2: Database Per Service

The legacy system has one database with 200 tables. As you extract services, each gets its own database:

  • Auth service → auth database (users, sessions, permissions)
  • Billing service → billing database (invoices, payments, subscriptions)
  • Remaining monolith → shared database (everything else)

Pattern 3: API Gateway Introduction

Place an API gateway in front of the legacy system. It becomes the routing layer for the strangler fig:

// API Gateway routes
const routes = {
  "/api/auth/*": "auth-service:3001",
  "/api/billing/*": "billing-service:3002",
  "/api/*": "legacy-monolith:8080", // everything else stays on legacy
};
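Route resolution for a table like the one above needs longest-prefix matching, so `/api/auth/login` reaches the auth service even though `/api/*` also matches. A sketch (with the `*` wildcards rewritten as plain prefixes for simplicity):

```typescript
// Route table mirroring the gateway config above, as plain prefixes.
const routes: Record<string, string> = {
  "/api/auth/": "auth-service:3001",
  "/api/billing/": "billing-service:3002",
  "/api/": "legacy-monolith:8080", // everything else stays on legacy
};

// Longest matching prefix wins, so specific routes shadow the
// catch-all legacy route.
function resolveUpstream(path: string): string | undefined {
  const match = Object.keys(routes)
    .filter((prefix) => path.startsWith(prefix))
    .sort((a, b) => b.length - a.length)[0];
  return match ? routes[match] : undefined;
}
```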

Measuring Progress

Track these metrics to know your migration is on track:

  • Traffic ratio — percentage of requests handled by new vs. legacy systems
  • Error rate delta — new system error rate should be equal to or lower than legacy
  • Latency comparison — new system should meet or beat legacy p50/p95/p99
  • Legacy surface area — lines of code, endpoints, or database tables remaining in the legacy system
  • Rollback frequency — how often you need to revert to the legacy path
Month 1:  [████░░░░░░░░░░░░░░░░] 20% migrated
Month 3:  [████████░░░░░░░░░░░░] 40% migrated
Month 6:  [████████████░░░░░░░░] 60% migrated
Month 9:  [████████████████░░░░] 80% migrated
Month 12: [████████████████████] 100% — legacy decommissioned
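The traffic-ratio metric behind a progress chart like this is just the new system's share of total requests — the simplest honest progress number for a strangler fig migration:

```typescript
// Percentage of traffic handled by the new system, rounded to a
// whole number for dashboards.
function migrationProgress(newCount: number, legacyCount: number): number {
  const total = newCount + legacyCount;
  if (total === 0) return 0;
  return Math.round((newCount / total) * 100);
}
// migrationProgress(800, 200) → 80
```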

What We've Learned

After modernizing legacy systems for healthcare organizations, financial institutions, and infrastructure companies, here are the patterns that hold:

  1. Incremental always beats big bang. Every time. No exceptions in 13 years of doing this.
  2. The routing layer is the most important architectural decision. Get this right, and everything else becomes manageable.
  3. Data migration is 60% of the work but gets 10% of the planning. Flip that ratio.
  4. You will discover undocumented business rules. Budget time for it. Talk to the people who use the system daily — they know things the code doesn't tell you.
  5. Legacy systems aren't always bad. Sometimes the right modernization is upgrading the infrastructure (containers, CI/CD, monitoring) while keeping the application logic intact.

If you're staring at a legacy system wondering where to start, we've been there — many times. The path forward doesn't require a rewrite. It requires a strategy.
