ADD Engineering Leadership Deck
CTO + VP Eng briefing 01 / 08

Slide 01

Your Codebase Is Not Agent-Maintainable.

CTO + VP Eng + Board
Core claim

Fifty years of "clean code" standards were built for human readers. Your team now includes agents. The code is not ready for them.

The next contributor to touch your code might be an AI agent. Increasingly, it will be. And the question nobody is asking: can that agent reliably modify and extend your codebase without introducing subtle defects? For most codebases, the answer is no. Not because the agents are dumb. Because the code was written for human-only teams — and your teams are not human-only anymore.

The standard: Agent-maintainable code. Not clean code. Not well-architected. Can an agent work here safely, right now, with today's models?

Slide 02

Mike Spent Three Weeks Blaming the AI. The Code Was the Problem.

The pattern you will recognize
What happened

A senior data engineer with a decade of streaming experience could not get an agent to maintain his Apache Flink pipeline.

The agent's output was not compiler-error wrong. It was semantically wrong. The kind of wrong that passes tests and breaks in production at 2 AM when the event stream spikes. Mike tried better prompts, more context, different models, RAG over the Flink docs. Three weeks. $11,400 in senior engineer time fighting a problem that was not a tooling problem.

His conclusion: "I don't think the agent is the problem. I think the code is the problem."
Why it happened

Flink is excellent software. But its API surface, abstractions, state management semantics — all designed for experienced humans who have internalized the mental models of stream processing over months of work.

An agent does not build intuition. It has statistical confidence. Watermarks, event time versus processing time, keyed state versus operator state — the agent has seen forty thousand Flink pipelines. It has seen ten million Express APIs. Its confidence distribution is wildly different across those two surfaces.

The insight: The difference is not intelligence. The difference is statistical exposure.

Slide 03

The $5M Agent-Hostile Code Tax. Run Your Own Numbers.

The formula
Headcount 300

Engineering org size. Multiply by your agent adoption rate — the percentage actively using AI agents for code changes.

Adoption × Rework 60% × 15%

Percentage of engineers using agents, times the percentage of agent-assisted time spent rewriting output, hand-holding through unfamiliar patterns, debugging subtle semantic errors that passed CI.

Annual cost ~$5M

300 × 0.6 × 0.15 × 2,080 hours × $90/hr loaded cost. Your numbers will be different. The formula will not.

If you are spending $1.5M on Copilot Enterprise licenses and your code is not agent-maintainable, you are buying a tool that will underperform its benchmarks in your environment. Agent-maintainable code is not a separate initiative from your AI tooling investment. It is a prerequisite for getting ROI from it.

The formula: Headcount × Adoption Rate × Rework Rate × Annual Hours × Loaded Cost
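The formula above as a runnable sketch. The default hours and loaded cost are the slide's illustrative figures, not benchmarks; substitute your own.

```python
def agent_hostile_tax(headcount: int, adoption_rate: float, rework_rate: float,
                      annual_hours: int = 2080, loaded_cost: float = 90.0) -> float:
    """Annual cost of rework on agent-assisted changes:
    Headcount x Adoption Rate x Rework Rate x Annual Hours x Loaded Cost."""
    return headcount * adoption_rate * rework_rate * annual_hours * loaded_cost

# The slide's example org: 300 engineers, 60% adoption, 15% rework.
print(f"${agent_hostile_tax(300, 0.60, 0.15):,.0f}")  # roughly $5.05M per year
```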

Slide 04

1.7x More Defects Is the Average. Your Code Is Not Average.

What the numbers really mean
The industry data

AI-generated code produces 1.7x more defects than human-written code. But that is a blended average across agent-friendly and agent-hostile codebases.

In agent-hostile code, the multiplier is worse. In agent-maintainable code, it approaches 1:1. The difference is not the model. The difference is whether your code aligns with the patterns the agent has statistical confidence about.

COBOL reality: 220 billion lines of COBOL in active production. 95% of US ATM transactions. Average developer age: 55. 10% of the workforce retires every year. The most agent-hostile code on the planet.
The biases are the feature
Express APIs, React, Django: 10M+ examples
Spring Boot, PostgreSQL: 1M+ examples
Flink, custom streaming: ~40K examples

Structure your codebase so the patterns the agent is most confident about are the patterns it encounters in your code. That is not dumbing down your architecture. That is designing for a new kind of contributor.

Slide 05

Five Principles. The Clean Code Books Had the Right Idea for the Wrong Audience.

The new standard
01

Conventional over clever

That custom monad transformer stack? Elegant. The agent will butcher it every time. The boring standard implementation? Flawless. "Clever" and "maintainable" always had tension. Now it resolves decisively.

02

Explicit over implicit

Agents cannot infer the design decisions you made at the whiteboard three years ago. Types. Names. Contracts. Assertions. If the agent needs to know it, write it down where the agent will see it.
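A minimal sketch of the principle, with hypothetical names (`RetryPolicy` and `fetch_with_policy` are illustrative, not from any real codebase). The contract the agent needs lives in the signature, not in someone's head:

```python
from dataclasses import dataclass

# Implicit: the caller must "just know" that delay is in seconds,
# that retries must be at least 1, and what happens on exhaustion.
def fetch(url, retries, delay):
    ...

# Explicit: types, units, preconditions, and failure behavior are
# written down at the boundary, where an agent will see them.
@dataclass(frozen=True)
class RetryPolicy:
    max_attempts: int      # total attempts, including the first
    delay_seconds: float   # fixed delay between attempts, in seconds

    def __post_init__(self) -> None:
        assert self.max_attempts >= 1, "need at least one attempt"
        assert self.delay_seconds >= 0, "delay cannot be negative"

def fetch_with_policy(url: str, policy: RetryPolicy) -> bytes:
    """Fetch url, retrying per policy. Raises the last error on exhaustion."""
    ...
```

The second version costs a few extra lines and removes every inference the first version demanded.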

03

Small, self-contained units

Single responsibility for a contributor with a context window, not long-term memory. The more a change requires understanding state outside the file, the higher the error rate.

04

Standard toolchains

Webpack, Vite, Maven, Gradle, Cargo — massive representation in training data. The agent knows them cold. Your custom Makefile with forty targets and twelve environment variables? The agent is guessing. When agents guess about build configuration, things break in ways that are hard to trace.

05

Tests as contracts

In a human-maintained codebase, tests verify behavior. When agents contribute code, tests become the contract — the mechanism by which your engineers verify the agent's work before it ships. Tests are not optional. Tests are the foundation.
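What a test-as-contract looks like concretely. A sketch with a hypothetical `apply_discount` function; the point is that each assertion pins down behavior an agent's change must preserve:

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Discount a price, rounding down to whole cents. Never negative."""
    assert 0 <= percent <= 100
    return price_cents * (100 - percent) // 100

# Contract tests: if an agent's change breaks any of these,
# the reviewer rejects the change. No judgment call needed.
def test_discount_contract():
    assert apply_discount(1000, 0) == 1000    # no discount is identity
    assert apply_discount(1000, 100) == 0     # full discount is free
    assert apply_discount(999, 50) == 499     # rounds down, never up
    assert apply_discount(1, 1) == 0          # small values stay non-negative

test_discount_contract()
```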

Slide 06

Five Questions. Ask Them About Any Repository. The Answers Are Your Map.

Assessment
01

Statistical exposure

What percentage of your top-twenty highest-churn files use patterns that appear in fewer than 10,000 public repositories? If your most-modified files use niche patterns, the agent is guessing every time.

02

Cross-boundary state

How many changes require understanding state in a different file, service, or system? Cross-boundary changes are where agents fail hardest. Three services plus a message queue plus a database trigger? Error rate goes through the roof.

03

Agent self-verification

Can an agent run your build, execute your tests, and verify its own changes without human intervention? If your build requires manual steps, environment secrets, or tribal knowledge — you have a human-dependent codebase.

04

Agent iteration rate

When an agent makes a change, how many attempts does it take to produce correct code? If consistently more than two, the code is the problem. Track this number. It is the leading indicator of agent-maintainability.

05

Compliance accounting

Does your audit framework account for agent-authored code? A subtle semantic error in payment processing — not a crash, a behavioral drift — is a SOX finding in financial services. A NERC CIP violation carries fines up to $1M per day. Agent-authored code in agent-hostile codebases is a compliance gap most audit teams have not thought about yet.

Slide 07

Same Agent. Same Model. Same Framework. Different Code Structure. It Worked.

Before and after
Iteration rate 6 → <2

Before: six attempts per change to get correct code. After refactoring for agent-maintainability: under two. Same agent. Same Flink framework.

Time per change 30 → 8 min

Agent-wrangling dropped from roughly thirty minutes per modification to about eight. Across 20-30 changes per week: fifteen hours down to four.

Annual savings $54K

Eleven hours reclaimed per week at $95/hour. $54,000 per year. One pipeline. One engineer. The refactoring took four days. Paid for itself in the first week.

What Mike did

  • Replaced custom state management with patterns that followed the most common Flink examples in the documentation
  • Made windowing logic explicit where it had been implicit
  • Broke one large pipeline into smaller, self-contained stages
  • Added property-based tests at every stage boundary
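A sketch of that last bullet: a property check at a stage boundary, stdlib-only for brevity (a real setup would typically use a library such as Hypothesis). `dedupe_events` is a hypothetical stage standing in for Mike's Flink stages; the properties are what any agent rewrite must preserve:

```python
import random

def dedupe_events(events: list[tuple[int, str]]) -> list[tuple[int, str]]:
    """Hypothetical pipeline stage: drop duplicate (event_id, payload)
    pairs while preserving first-seen order."""
    seen = set()
    out = []
    for e in events:
        if e not in seen:
            seen.add(e)
            out.append(e)
    return out

def check_boundary_properties(trials: int = 200) -> None:
    """Properties an agent's rewrite of dedupe_events must keep,
    checked against randomly generated inputs."""
    rng = random.Random(0)  # seeded for reproducible failures
    for _ in range(trials):
        events = [(rng.randint(0, 5), "x") for _ in range(rng.randint(0, 20))]
        out = dedupe_events(events)
        assert len(set(out)) == len(out)      # no duplicates remain
        assert set(out) == set(events)        # nothing is lost
        assert all(e in events for e in out)  # nothing is invented

check_boundary_properties()
```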

What Mike did not do

  • Did not switch models or write better prompts
  • Did not try a different agent framework
  • Did not add more RAG context or retrieval layers
  • Did not rewrite the pipeline from scratch — refactored it in four days

Slide 08

Your Team Has a New Member. You Did Not Hire It. It Has Opinions About Your Code.

Decision close
The refactoring conversation you need this quarter

In two years — maybe less — the majority of code changes in production systems will be authored or co-authored by agents. That is not a prediction. That is the trajectory.

If your codebase is hostile to agents, you will fall behind teams whose codebases are not. "Hostile to agents" does not mean bad code. It means code that was written for human-only teams. Start with your highest-churn files — the ones modified most frequently. Make them conventional. Make them explicit. Make them boring.

Start here: Highest-churn files first. The ones agents will touch first and touch most often. Boring is the new beautiful when your contributor is an AI.