ADD Engineering Leadership Deck
CTO + CRO + CEO briefing 01 / 05

Slide 01

Your Best AE Didn't Pick Salesforce. Your Best Engineer Shouldn't Pick Their AI.

CTO + CRO + CEO
Core argument

You standardized your sales tools years ago. Your competitors are doing the same with AI development — and measuring what you can't.

Your top AE — the one closing $5M deals — didn't pick Salesforce. Your CRO did. That rep adapted in two weeks and got back to selling. You standardized because the alternative is catastrophic. AI development follows the same physics.

The gap: Your competitors standardized six months ago. They're measuring what works, killing what doesn't, and iterating weekly. They're compounding organizational learning while you compare apples to oranges across fifteen toolchains.

Slide 02

Imagine New York on Salesforce, London on HubSpot, Remote on Pipedrive, Enterprise on Airtable. You Can't Forecast. You Can't Scale. You Can't Even See What's Working.

The disaster you already prevent
Why this would be catastrophic in sales

You can't answer your board's question: "What's our sales cycle?" Everyone tracks differently. No common data. No benchmarks. No replication.

  • Can't forecast because each team tracks pipeline differently.
  • Can't identify why London closes 30% faster than New York — different metrics, different tools, different data.
  • Can't replicate what works because you can't see what's working.
  • Can't scale what's working because you can't measure it across teams.
  • Someone wants to use a legal pad and Rolodex? "1975 wants its process back."
Why this is happening in engineering right now

Every developer picks their own AI platform. You just recreated the HubSpot/Salesforce/Airtable nightmare — with code instead of deals.

  • Can't compare output quality across teams when tools are different.
  • Can't measure cycle time improvement when toolchains diverge.
  • Can't identify which AI workflows produce better outcomes.
  • Can't replicate what works because you can't see what's working.
Result: Fifteen toolchains. Zero organizational learning. Your competitors are compounding while you're comparing apples to oranges.

Slide 03

The Developers Who Won't Adapt to Excellent Standardized Tools Aren't Senior Talent You Need to Retain. That's a Performance Problem Wearing a Hoodie.

How to handle pushback
Your top AE: 2 weeks

Adapts to excellent standardized tools and gets back to selling. Doesn't argue about whether Salesforce is optimal. Focuses on outcomes. This is what senior talent looks like.

Engineer resistance: Same physics

Your best developers will adapt to excellent standardized AI tools in two weeks, maybe three. The ones who won't are optimizing for personal comfort over company outcomes. That's a performance problem.

What the resistance reveals: Priorities

An engineer who insists on personal tool choice over organizational measurement capability is telling you something about their priorities. They're not wrong to have preferences. They're wrong to treat preferences as non-negotiable.

The reframe: You're not taking away a perk. You're making a strategic decision that determines whether you can measure, manage, and scale your AI development capability. That is a leadership decision, not a developer preference question.

Slide 04

Standardize So You Can Measure. Measure So You Can Improve. Improve So You Can Scale. This Compounds Organizational Learning Quarter Over Quarter.

What you're actually buying

With standardization: what becomes possible

  • Measure output quality, cycle time, and cost efficiency across all teams on a common baseline.
  • Identify which AI workflows produce the best outcomes — and replicate them across the org.
  • Kill what doesn't work. Invest more in what does. Weekly improvement cycles.
  • Build organizational knowledge about AI-augmented development that compounds over time.
  • Answer your board's question: "Is our AI investment working?" With data.

Without standardization: what you're stuck with

  • Fifteen toolchains. Zero cross-team measurement.
  • No way to know which teams are getting the most from AI — or why.
  • No replication of what works. Every team reinventing the same patterns.
  • Organizational learning reset every quarter when someone changes their setup.
  • A $2 million AI investment you can't evaluate because you can't measure it consistently.

Slide 05

The Companies That Move First Will Be Measuring and Improving While You're Still Debating. That Gap Doesn't Close. It Widens.

Decision close
The decision in front of you

Stop treating AI tool selection like a perk. It's a strategic decision that determines whether you can measure, manage, and scale your AI development capability.

Your sales organization figured this out in 2005. They didn't ask every rep to vote on their preferred CRM. They made a strategic decision, communicated the why, and held the standard. Your engineering organization needs to do the same thing. By Q1.

The companies that move first will be measuring and improving while you're still debating. That gap doesn't close. It widens. Every week you delay is a week your competitors compound their learning and you don't.

Timeline: Your competitors standardized six months ago. They're already multiple improvement cycles ahead. The window to catch up is closing.