ADD Engineering Leadership Deck
CTO + Senior Engineering Leaders 01 / 06

Slide 01

You Cannot Read Your Way Into This.

Core claim

You've been reading about AI. Talking to vendor partners. Evaluating options. None of that is the same as building with it. And the difference is everything.

A startup founder put it plainly: "You bought a book on nuclear submarines. You're a surface vessel guy — two dimensions, combustion engines. You can't read your way into going three-dimensional and atomic. It doesn't work that way."

The gap A 6-engineer startup with 2 CPAs who learned to code last year is eating market share from competitors with 50-person engineering teams. Not because they read more. Because they built more.

Slide 02

You Flew to Seattle and Dublin Before Picking a Hyperscaler. AI Is More Consequential. Why Are You Reading About It?

The due diligence gap
How you evaluated cloud infrastructure

You toured three datacenters. You flew to Seattle, Northern Virginia, Dublin. You met executive teams. You smelled the diesel in the tanks. You looked at the Caterpillar generator log books.

That was critical infrastructure. You wanted to see the actual thing, not read a spec sheet. You made the same distinction for your first engineering job — you learned by working inside the system, not by reading about it.

How you're evaluating AI

Reading. Talking to vendor partners. Evaluating options. Forming committees. Commissioning reports.

AI-augmented development changes every mental model you have about how software gets built: estimation, code review, testing, deployment, team structure, sprint planning. All calibrated for human developers at human speeds. All wrong now.

The irony A tool that changes everything about how your engineering organization works deserves more than a vendor briefing. It deserves your hands on the keyboard.

Slide 03

Your Mental Models for Estimation, Code Review, and Testing Are Calibrated for Human Developers. They Are Wrong Now.

The calibration gap
Estimation

Story points, sprints, planning poker

All calibrated to how long it takes a human to type, think, debug, and commit code. When an agent generates a working implementation in ten minutes that would have taken a senior engineer two days, every estimate in your planning system is wrong by a factor you haven't measured.

Code review

Review at human generation speed

Your review process was designed for PRs that took days to write. Now they take minutes. The review can take longer than the writing. Your process doesn't account for that inversion. Nobody's changed it because nobody's experienced it from the inside.

Testing

Test strategy for human-written code

AI generates code. AI generates tests. Both in minutes. Your QA model assumes the bottleneck is test coverage. The actual bottleneck is now whether anyone with domain knowledge validated that the tests prove the right things. You can't know this without building it yourself.

Slide 04

6 Engineers. 2 CPAs Who Learned to Code Last Year. Eating Market Share From Your 50-Person Team.

Competitive economics
Startup team 6 engineers

Two are CPAs who learned to code last year. Four are recent graduates. They understand the accounting domain, and AI handles the parts they don't know yet. They are shipping features faster than your team.

Your team 50+ engineers

Running two-week sprints. Arguing about story points. Building like it's 2020. Not because your engineers are less capable — because your operating model was calibrated for a different era and nobody with authority has changed it.

The difference Hands-on

The startup founder built with AI from day one. Understands from direct experience what the tool changes. Makes decisions from that understanding. You're making decisions from articles, vendor briefings, and secondhand reports.

What the CPAs know They understand the domain. AI handles the code they don't know yet. That combination — domain expertise plus AI capability — is outperforming pure engineering headcount. Your org structure doesn't account for this yet.

Slide 05

Your Team Needs to Become Specification Writers and Context Architects. Not Prompt Crafters. Architects.

What changes operationally

What actually works with AI

  • Give context first. "We're in the payment processing module, cutting latency without breaking PCI compliance." Not "optimize this."
  • Ask for the plan. "Walk me through your approach before you write anything." Two minutes upfront saves two hours of cleanup.
  • Iterate with specifics. "Caching works, but adjust for data freshness requirements." Concrete, not vague.
  • Save patterns. Document what works as your standard approach — the spec becomes organizational knowledge.
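
The four practices above can be sketched as a small prompt-builder, a hypothetical helper (every name here is illustrative, not any vendor's API) that turns the context-first spec into a reviewable artifact your team can version and reuse:

```python
# Hypothetical sketch: encode the practices above (context first, explicit
# constraints, plan-before-code) as a reusable template. All names are
# illustrative; this calls no real model API.

def build_task_prompt(context: str, constraints: list[str], task: str,
                      plan_first: bool = True) -> str:
    """Assemble a context-first prompt: situation, constraints, then the ask."""
    lines = [f"Context: {context}"]
    lines += [f"Constraint: {c}" for c in constraints]
    lines.append(f"Task: {task}")
    if plan_first:
        # "Ask for the plan" practice: request the approach before any code.
        lines.append("Before writing any code, walk me through your approach.")
    return "\n".join(lines)

# Example from the payment-processing bullet above.
prompt = build_task_prompt(
    context="Payment processing module; goal is lower p99 latency.",
    constraints=["Must not break PCI compliance", "No schema changes"],
    task="Propose a caching strategy that respects data freshness.",
)
```

Because the template is code, it can live in the repo and go through review, which is how "save patterns" becomes organizational knowledge rather than individual habit.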

What most teams do instead

  • "Fix this code." — no context, no constraints.
  • "Make this better." — better by what measure?
  • "Optimize our process." — which process? to what end?

You're giving billion-parameter models vaguer instructions than you'd give a new hire, then acting surprised when they don't read your mind.

Systemic shift This isn't about better prompts. It's moving from "the AI screwed up" to "we didn't provide enough context." Same realization you had with microservices and continuous deployment.

Slide 06

Stop Reading About It. Build Something This Week. That Is the Only Path to Understanding What You're Leading.

Decision close
The nuclear submarine problem

You can read every book on nuclear propulsion and still not know what it's like to dive to 800 feet. This is the same. AI changes the physics of software delivery. You have to feel the new physics before you can lead in it.

Not for leadership theater. Because you literally cannot understand what's changing without hands-on experience. Your next sprint planning, your next team structure decision, your next AI investment — all depend on a ground truth you don't yet have.

The minimum Build one real thing with AI this week. Something that touches production. Pay attention to where your assumptions break. Those breaks are the curriculum.