ADD Engineering Leadership Deck
CTO + Board briefing 01 / 07

Slide 01

The Third Attempt Is the Most Dangerous One.

CTO + Board + Director
Core claim

The tooling is real. AI agents can read a million lines of code, map dependency graphs, write characterization tests. That does not change the physics that killed your first two attempts.

One fictional CEO spent $47 million across three failed modernization initiatives before he changed the approach. The monolith won every time — not because of bad tools, but because of organizational gravity. The pitch you are about to hear will sound different. It is not.

Core question Can the person across the table from you tell the difference between a clean seam and a cut that bleeds for six months? Ninety seconds will tell you.

Slide 02

$47 Million. Three Attempts. Same Structural Mistake Each Time.

Economics
Attempt 1 $4M

Full rewrite. 20 engineers. 47 services; 2 rewritten. The synchronization layer back to the monolith grew more complex than the original code. Deprioritized at month 14.

Attempt 2 $3M+

Strangler fig. 18 months. Extracted 4 modules; 36 still in the monolith. Three months to map the first seam. The CTO who sponsored it left nine months later.

Attempt 3 Pending

AI agents are real. The tooling changes the economics of legacy rescue. The organizational gravity that killed attempts one and two has not been repealed.

The monolith won. It always wins the first time. Because rewriting a legacy system is not an engineering problem. It is a physics problem. And nobody accounted for the physics.

Organizational gravity is the constraint — not tooling

Slide 03

Three Names. Ninety Seconds. It Has Never Been Wrong.

The vetting test

The three names

  • Feathers. If they say Michael — Michael Feathers, Working Effectively with Legacy Code, characterization testing and seam identification — you are in a real conversation. If they do not know who Michael Feathers is, walk them out politely.
  • Martin. If they say Fowler — strangler fig, refactoring catalog — that is a second good signal. Then push: ask them when the strangler fig fails. A practitioner knows. A presenter redirects.
  • Cyclomatic complexity. Watch their face. A pause. A redirect to "code quality metrics." A tool name instead of an explanation. Those are your answer. A practitioner explains this the way a chef explains knife technique — from muscle memory.
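
The third name is concrete enough to check yourself: cyclomatic complexity is one plus the number of branch points in a function, nothing more. A minimal sketch using Python's standard `ast` module (illustrative only; dedicated tools such as radon handle more constructs):

```python
import ast

# Node types that introduce an independent path through the code.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """McCabe complexity: 1 + number of decision points in the source."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes len(values) - 1 decision points.
            decisions += len(node.values) - 1
        elif isinstance(node, BRANCH_NODES):
            decisions += 1
    return 1 + decisions

# Invented example function: two ifs, one 'and', one loop -> complexity 5.
snippet = """
def ship(order):
    if order.rush and order.paid:
        return "air"
    for item in order.items:
        if item.fragile:
            return "ground"
    return "standard"
"""
print(cyclomatic_complexity(snippet))
```

A practitioner can give you that definition, and a number for a function on sight, without reaching for a tool name.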

Who does not pass this test

  • The large consultancy partner who led legacy rescues before AI agents existed. They know the patterns but have not integrated agents into the discipline.
  • The AI-native engineer who demos beautifully but has never stood in front of a 15-year-old monolith and felt the weight of it.
  • The vendor's professional services arm. Accenture, Deloitte — they do not have the practitioner-plus-AI combination yet.
The person who can pass the test is rare. They work at small firms or on their own. They do not have a slide deck. They have a GitHub history.

Slide 04

Your Organization Is a Gravitational Field. The Monolith Is the Center of Mass.

Why it keeps failing
The gravity problem

Sprint cadence. Planning rituals. Approval chains. Deployment process. Incentive structures. All of it pulls modernization effort back toward the center of mass.

The engineer who follows the existing process gets a clean review. The engineer who tries the new extraction pattern has to explain it to three people, get an exception approved, and defend it in a retro when something breaks. People optimize for the path of least resistance.

Physics Your planning process chops extraction into two-week sprints because that is the only shape your system knows. Legacy extraction does not fit that shape.
The pitch that always fails

"We will embed with your teams. We will refactor the monolith within the constraints of your existing organization. We will coach your engineers along the way so that when we leave, your people own the new architecture."

That has never worked. Not the first time. Not the second time. Not with AI agents. The combination of refactoring inside the organization that produced the monolith while simultaneously upskilling the teams that maintain it does not produce the outcome you are paying for.

Root cause This is not willpower. Not talent. Not budget. This is physics. And the physics have not changed.

Slide 05

Refactor First. In Isolation. Transfer Ownership Second.

What actually works
Isolate

Structurally separate from the gravitational field

Separate governance. Separate deployment pipeline. Separate decision-making authority. No sprint planning. No architecture review board. No change advisory board. A small team — three people, maybe four — working with AI agents the way a surgeon works with instruments.

Extract

Module by module, seam by seam

Characterization tests against undocumented modules in days, where a human team would need months. Dependency graphs across the full codebase in hours instead of quarters. Real seams — not the ones on the 2019 architecture diagram — based on what the code actually does.
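
A characterization test, in Feathers's sense, pins down what the code does today rather than what any spec says it should do. A hypothetical sketch (the `legacy_rounding` function is an invented stand-in for an undocumented monolith routine):

```python
import unittest

def legacy_rounding(amount_cents: int) -> int:
    """Invented stand-in for an undocumented pricing routine."""
    # Surprising behavior discovered by running the code, not by reading
    # a spec: totals round DOWN to the nearest 5 cents.
    if amount_cents == 0:
        return 0
    return amount_cents - (amount_cents % 5)

class CharacterizeLegacyRounding(unittest.TestCase):
    """Pin current behavior so extraction cannot silently change it.
    These assertions record what the code DOES, not what it SHOULD do."""

    def test_rounds_down_to_nearest_five(self):
        self.assertEqual(legacy_rounding(1042), 1040)

    def test_multiples_of_five_unchanged(self):
        self.assertEqual(legacy_rounding(35), 35)

    def test_zero_stays_zero(self):
        self.assertEqual(legacy_rounding(0), 0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The point of the pattern: once the behavior is pinned, the extracted service must reproduce it exactly, or the test fails before a customer notices.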

Transfer

Move into a house that is standing

Your engineers learn the new patterns by working in a codebase built with those patterns. They own the new system because the new system was built to be owned. The transfer happens on a timeline that makes sense, into an architecture ready to receive them.

Key insight Your teams do not learn while the house is being rebuilt. They move into a house that is standing. That is the only transfer that works.

Slide 06

What the Agents Actually Change — and What They Do Not

AI in legacy rescue

What agents change

  • Map dependency graphs across a million-line codebase in hours, not quarters.
  • Write characterization tests against undocumented modules at speeds no human team can match.
  • Identify extraction boundaries across the full codebase simultaneously.
  • Generate scaffolding for new services once the seam is identified.
  • Compress a rescue timeline from years to months — when led by a practitioner.
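
The dependency-graph step in the first bullet can be sketched in miniature. A simplified, hypothetical example that walks a directory of Python files and records import edges; an agent does the same thing at million-line, multi-language scale:

```python
import ast
from collections import defaultdict
from pathlib import Path

def import_graph(root: str) -> dict[str, set[str]]:
    """Map each module under `root` to the modules it imports."""
    graph = defaultdict(set)
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                graph[path.stem].update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[path.stem].add(node.module)
    return dict(graph)

def extraction_candidates(graph: dict[str, set[str]]) -> list[str]:
    """Modules nothing else depends on: the cheapest seams to cut first."""
    imported = {dep for deps in graph.values() for dep in deps}
    return sorted(module for module in graph if module not in imported)
```

The seam judgment is the part the sketch cannot show: the graph tells you where the edges are, not which cut will bleed for six months.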

What agents do not change

  • Organizational gravity. Your sprint cadence still chops extraction into the wrong shape.
  • Seam judgment. The practitioner still decides where to cut and when a cut will bleed for six months.
  • Structural isolation. The work still has to happen outside your existing governance model.
  • The strangler fig's failure modes. Tangled seams are still tangled. Side effects still cross boundaries.
  • The ninety-second test. Better tooling does not compensate for a presenter who has never done this before.

Slide 07

The Next Vendor Presentation Is in Two Weeks. Run the Test First.

Decision close
Three names before the next meeting

Feathers. Martin. Cyclomatic complexity. Ninety seconds. Before you let them run a demo, before they show you the ROI slide, before the steering committee assembles.

The third time can work. But only if the person leading it has done this before with AI agents — not before AI agents, and not only with AI agents. That combination is rare enough that finding it is the first job.

The structural isolation is non-negotiable. No sprint cadence. No architecture review board. No embedding inside the organization that produced the problem. Separate governance, separate deployment, separate decision rights.