ADD Engineering Leadership Deck
CxO + VP Engineering briefing 01 / 06

Slide 01

You Added AI. You Now Run a Slop Factory.

CxO + VP Engineering
Core claim

Your PR count is up forty percent. Your defect rate is up too — you just have not measured it yet. You accelerated generation 10x and left every downstream process unchanged. That is not adoption. That is a bottleneck migration.

What you get is code that compiles, passes AI-generated tests, and follows patterns (often the wrong ones) while introducing subtle coupling across domain boundaries. It is not obviously broken. It is insidiously mediocre. And it is flooding your review queues faster than your senior engineers can absorb it.

Diagnosis You do not have an AI problem. You have a governance problem that AI made visible.

Slide 02

You Moved the Bottleneck. You Did Not Remove It.

Structural problem
PR volume increase +40%

More pull requests. Not more value. Your review queue is drowning in machine-generated code nobody has time to evaluate properly.

Review workload 3x

Senior engineers buried under AI output. A ten-minute generation requires hours of careful review. The math does not work.

Downstream pipeline Same

Review capacity, architectural governance, merge standards, testing strategy — none of it changed. You just turned the faucet up.

You know what happens when you pour ten times more water into a pipe that is the same diameter downstream? It backs up. That is your engineering organization right now.

The pipeline problem
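The backed-up-pipe claim is just queueing arithmetic. A minimal sketch, using illustrative numbers (the PR counts, review hours, and capacity below are assumptions for the example, not measurements): once reviewer demand exceeds reviewer capacity, utilization passes 1 and the queue grows without bound.

```python
# Back-of-envelope model of the review queue as a capacity problem.
# All numbers are illustrative assumptions, not measured figures.

def review_queue_utilization(prs_per_week: float,
                             review_hours_per_pr: float,
                             reviewer_hours_per_week: float) -> float:
    """Utilization = review demand divided by available review capacity.
    Below 1.0 the queue drains; above 1.0 it grows every week."""
    return (prs_per_week * review_hours_per_pr) / reviewer_hours_per_week

# Before AI adoption: 50 PRs/week, 0.5h of review each, 40h of senior review time.
before = review_queue_utilization(50, 0.5, 40)   # 0.625 -> queue drains

# After: +40% PR volume, ~3x review effort per PR, identical downstream capacity.
after = review_queue_utilization(70, 1.5, 40)    # 2.625 -> queue backs up

assert before < 1 < after
```

The point of the sketch: the "faucet" variables moved, the "pipe diameter" variable did not, and no amount of individual reviewer heroics changes a utilization above 1.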

Slide 03

Your Velocity Metric Is Measuring Waste

CFO lens

What slop looks like

  • Compiles successfully. Passes AI-generated tests. Follows patterns — often the wrong ones.
  • Introduces subtle coupling across domain boundaries that no one catches until production.
  • Duplicates existing logic because the agent has limited context windows and no institutional memory.
  • Poor naming, inconsistent abstractions, architectural drift that compounds with every merged PR.

What slop costs

  • Defect rates rising while everyone celebrates velocity metrics that measure throughput, not value.
  • Senior engineers spending review cycles on code they would have written differently in less time.
  • Architectural debt accumulating faster than any team can pay down — because the generation never stops.
  • In lean terms: overproduction. The most expensive form of waste. And you are incentivizing more of it.

Slide 04

Four Structural Changes or You Keep Drowning

Operating model
01

Review capacity planning

Review is now the constraint, not development. A two-day human-written feature needs thirty minutes of review. A ten-minute AI generation needs hours. Fund review capacity explicitly or accept that rubber-stamping is your new quality bar.

02

Upstream architecture review

Architectural decisions happen in design docs and interface contracts before generation starts. You are reviewing the blueprint, not every brick. Move governance upstream or drown in downstream defects.

03

Ruthless testing standards

Testing strategy must focus on user-facing behavior and edge cases, not coverage percentages. AI-generated tests validating AI-generated code is a closed loop. Break it or it breaks you.
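The closed loop is easiest to see side by side. A minimal sketch, where `apply_discount` is a hypothetical business rule standing in for anything an agent might generate: the first test restates the implementation and can never fail for the wrong answer; the second pins user-facing behavior and edge cases.

```python
# `apply_discount` is a hypothetical stand-in for agent-generated business logic.
def apply_discount(price: float, pct: float) -> float:
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

# The closed loop: an AI-generated test that merely mirrors the implementation.
# If the formula is wrong, this test is wrong in exactly the same way.
def test_mirrors_implementation():
    assert apply_discount(100, 10) == round(100 * (1 - 10 / 100), 2)

# Breaking the loop: tests asserting behavior the user actually depends on.
def test_user_facing_behavior():
    assert apply_discount(100, 0) == 100.0    # no discount leaves price intact
    assert apply_discount(100, 100) == 0.0    # full discount yields zero
    try:
        apply_discount(100, 150)              # out-of-range input must fail loudly
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for pct > 100")
```

The standard to enforce: every merged test must be explainable in terms of a user-visible requirement, not in terms of the code it covers.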

04

Meaningful merge criteria

Two approvals and green CI is not a quality gate. It is a rubber stamp. Merge gates must verify architectural conformance, duplication analysis, domain boundary integrity, and genuine understanding from reviewers.
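Part of that gate can be mechanical. A minimal sketch of a machine-checkable domain-boundary check, under stated assumptions: the domain names, the rule table, and the idea that domains map to top-level packages are all hypothetical, and a real gate would load its rules from an architecture decision record rather than hardcoding them.

```python
# Sketch of a merge-gate check for domain boundary integrity.
# Domains, rules, and package layout are illustrative assumptions.
import re

# Pairs of domains that must never import each other directly.
FORBIDDEN = {
    ("billing", "shipping"),
    ("shipping", "billing"),
}

# Matches `import foo` or `from foo import ...` at the start of a line.
IMPORT_RE = re.compile(r"^\s*from\s+(\w+)|^\s*import\s+(\w+)", re.MULTILINE)

def boundary_violations(domain: str, source: str) -> list:
    """Return the cross-domain imports this gate would block for a file in `domain`."""
    violations = []
    for match in IMPORT_RE.finditer(source):
        target = match.group(1) or match.group(2)
        if (domain, target) in FORBIDDEN:
            violations.append(target)
    return violations

# A change in the shipping domain that quietly reaches into billing:
diff = "import billing\nfrom shipping import labels\n"
assert boundary_violations("shipping", diff) == ["billing"]
```

Checks like this do not replace reviewer understanding; they free reviewers from policing boundaries by hand so their attention goes to design intent.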

Slide 05

Encode Standards Into Agents, Not Into Review Processes

Implementation
Workflow inversion

The traditional workflow is not a step toward agent-driven development. It is a dead end. Invert it or get left behind.

Current process: develop, then QA, then review. Backwards. The emerging model: clear requirements, constrained generation (code and tests together), validation, deploy. Humans guiding. Agents driving.

The winning shape: smaller teams of engineers who excel at constraining agents. They encode architectural standards, coding guidelines, and testing expectations directly into agent workflows so output meets standards by design, not by review.
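"Encoded into agent workflows" can be concrete. A minimal sketch, with illustrative rule text: standards live in one machine-readable structure and are rendered into a constraint preamble prepended to every agent task, so every generation starts from the same guardrails instead of relying on review to catch deviations afterward.

```python
# Sketch of standards-by-design: one source of truth for engineering rules,
# injected into every agent run. Rule contents are illustrative assumptions.

STANDARDS = {
    "architecture": [
        "New code must live inside an existing bounded context; no new top-level packages.",
        "Cross-domain calls go through published interfaces, never direct imports.",
    ],
    "testing": [
        "Every change ships with behavior-level tests for user-facing paths and edge cases.",
        "Do not generate tests that restate the implementation line by line.",
    ],
}

def agent_preamble(standards: dict) -> str:
    """Render the standards as a constraint block prepended to every agent task."""
    lines = ["Non-negotiable engineering standards:"]
    for area, rules in standards.items():
        for rule in rules:
            lines.append(f"- [{area}] {rule}")
    return "\n".join(lines)

preamble = agent_preamble(STANDARDS)
assert preamble.startswith("Non-negotiable")
```

The design choice that matters: the rules are data, versioned alongside the code, so updating a standard updates every agent in the organization at once.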

Proof point Billion-dollar valuations with single-engineer teams. The economics have already shifted.

The retrofitting problem

If you are already running a slop factory, you face cultural drift. Engineers have internalized "generate first, think second." Review practices have eroded under the sheer weight of volume. Architectural debt has accumulated silently.

Retrofitting governance onto a team that has already internalized bad AI habits is significantly harder than building the right governance from day one. The longer you wait, the more expensive the correction.

Warning Every week of delay compounds the cultural debt. Habits calcify. Start now.

Slide 06

Governance Problem. Not AI Problem. Fix It Now.

Decision close
The choice

You do not have an AI problem. You have a governance problem that AI made visible. The question is whether you redesign now or pay compounding interest on slop for the next two years.

The organizations that got this right asked hard questions before they bought licenses: How do we maintain architectural coherence with cheap generation? What does seniority mean when junior engineers generate code at the same speed? How do we prevent duplication across systems?

The organizations that got this wrong celebrated velocity spikes and called it transformation. They are the slop factories.