Questions that reveal the gap
- "How do we train AI on our specific Jira workflow?"
- "Can AI learn our code review standards?"
- "How do we get AI to follow our deployment approval process?"
- "Can AI tell me how many story points this is?"
Slide 01
A Fortune 500 executive asked me last month whether we could help their QA team teach AI their "really unique testing process." That same week, a startup with eight engineers and AI agents shipped more tested, production-ready features than that enterprise's 400-person engineering organization shipped all quarter.
Slide 02
An IT director at a major hospital had an executive who asked seriously — with a detailed workflow — why they couldn't send faxes from a corporate smartphone. The executive wasn't stupid. They didn't understand what smartphones replaced.
"How do we teach AI our Jira workflow?" is the same question. It assumes AI fits into the existing process. It misses that AI eliminates the constraints that made the process necessary.
Eighteen months of conversations with CTOs, VPs, and delivery leaders point to one pattern: organizations that ask the wrong questions spend a year optimizing workflows AI was built to make obsolete.
Within five minutes of a conversation, I can tell whether someone understands AI's capability or whether they're still operating in the old paradigm.
The diagnostic works on every team, every industry, every org size
Slide 03
Slide 04
You tried to automate testing years ago. The tests were brittle, broke with every refactor, and took more time to maintain than they saved. So you hired people to click through workflows instead.
When someone asks "How do we teach AI our testing process?", they're revealing they don't understand that AI can write the comprehensive automated tests you never wrote because they were too expensive. The entire premise of the question misses what AI is capable of.
Companies are restructuring from many teams to far fewer. Not through layoffs: once they understood what AI could do, coordination overhead became debt they could eliminate.
QA engineers retrained as developers using AI to write better tests than manual QA ever could. People who understand edge cases make great engineers when you remove the coding bottleneck. Quality up. Cycle time down.
Requirements documentation eliminated. Product and engineering collaborating directly with AI capturing context. Translation debt gone.
Slide 05
The startup: eight engineers and AI agents. They understood what AI can actually do, and they shipped more tested, production-ready features in one week than the enterprise shipped all quarter.
Fortune 500. 400-person engineering organization. Still asking how to teach AI its QA workflow. Shipping less. Paying more. Per-feature cost that cannot compete.
The 400-person org does not close this gap by giving every engineer a Copilot license. The gap is in how the work is structured, not how fast individuals type.
Your team knows where the debt is. They just don't understand yet that AI's capability lets them pay it down. Once they see it, transformation accelerates naturally.
The bottleneck is understanding, not effort
Slide 06
Reading, watching demos, and attending conferences do not close the gap. The only way is to watch AI agents do real work on your actual codebase, your actual backlog, your actual test suite. Not a sandbox. Your environment.
When your QA lead asks how to teach AI your testing process, it's because nobody has shown them what AI can actually do. When your architect asks about AI code review standards, it's because they don't yet see what changes. These are leadership opportunities, not team failures.
The fastest organizations didn't mandate AI adoption. They created conditions where people could see what was possible. Once your VP of Engineering watches an AI agent generate comprehensive tests for a feature that would have taken a QA sprint, the questions change immediately.
Slide 07
The gap is not permanent. It is closable in weeks with the right exposure. But it compounds. Every quarter you optimize an AI tool for an obsolete workflow is a quarter the startup in your market is shipping with a fraction of your headcount.
The Fortune 500 executive with the QA question is not lost. But they need someone to show them what the startup already knows. That exposure does not come from reading. It comes from watching AI do the work — on real problems, in a real environment, with real consequences.