Your Questions About AI in the SDLC Reveal Exactly Where You Are in the Adoption Curve—And How to Bridge the Gap Before You Waste a Year

An executive at a Fortune 500 company asked me last month whether we could help their QA team teach AI their “really unique testing process.”

That same week, a startup with eight engineers and AI agents shipped more tested, production-ready features than that enterprise’s 400-person engineering organization shipped all quarter.

The executive’s question wasn’t wrong. It was revealing. It exposed that they didn’t yet understand what AI can actually do in the SDLC.

That gap in understanding is the difference between organizations that transform and organizations that get transformed.

The Smartphone Fax Question

A friend who’s an IT director at a major hospital told me about an executive who asked why they couldn’t send and receive faxes from their corporate smartphone. Not as a joke. As a serious feature request with a detailed workflow explanation.

The executive wasn’t stupid. They just didn’t understand what smartphones could actually do. They were trying to preserve a fax workflow instead of recognizing that smartphones eliminated the need for faxes entirely.

This is exactly what’s happening with AI in the SDLC.

The Questions That Expose The Gap

I’ve spent eighteen months in conversations with CTOs, VPs of Engineering, and delivery leaders. I can tell within five minutes whether someone understands AI’s capability or whether they’re still operating in the old paradigm.

Questions That Reveal You Don’t Understand AI’s Capability Yet: “How do we train AI on our specific Jira workflow?” “Can AI learn our code review standards?” “How do we get AI to follow our deployment approval process?” “Can AI tell me how many story points this is?”

Questions That Show You’re Starting To Get It: “What work actually disappears when AI has full context?” “Which processes exist because humans needed them versus because customers need them?” “What organizational debt can we finally eliminate?”

The first set assumes AI fits into your existing process. The second recognizes AI eliminates the constraints that created your process.

If you’re asking the first set of questions, you’re not behind because you’re slow. You’re behind because nobody’s shown you what AI can actually do in your environment yet. The good news? That gap can be closed in weeks, not years.

What Everyone Gets Wrong

Here’s what everyone gets wrong: they think AI is going to solve their problems. It won’t.

AI exposes that most of your problems were never actually problems. They were workarounds for constraints that are disappearing.

Take that question about teaching AI your “unique testing process.” It reveals a fundamental misunderstanding.

You don’t have manual testers because manual testing is better. You have them because writing comprehensive automated tests was never worth the investment. The tests were brittle, broke with every refactor, and took more time to maintain than they saved.

When someone asks “How do we teach AI our testing process,” they’re revealing they don’t understand that AI doesn’t need to learn manual testing. AI can write the comprehensive automated tests you never wrote because they were too expensive. The entire premise of the question misses what AI is capable of.

Your dev teams can own quality now because AI eliminated the constraint that made comprehensive testing prohibitively expensive. Your test team doesn’t need to spend days clicking through workflows because AI eliminated the constraint that made manual regression testing your best option.
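To make the point concrete, here is a sketch of the kind of edge-case-heavy regression test that was rarely worth writing and maintaining by hand, but that AI can generate cheaply. The function and the cases are hypothetical, illustrative only:

```python
# Hypothetical example: a pricing helper and the exhaustive edge-case
# table a team would rarely bother to enumerate manually.

def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, rounding down, never negative."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    if price_cents < 0:
        raise ValueError("price must be non-negative")
    return price_cents - (price_cents * percent) // 100

# Edge cases: boundaries, rounding, and degenerate inputs.
CASES = [
    (1000, 0, 1000),   # no discount
    (1000, 100, 0),    # full discount
    (999, 33, 670),    # rounding: 999 - (999*33)//100 = 999 - 329
    (0, 50, 0),        # zero price
    (1, 99, 1),        # tiny price: floor keeps the last cent
]

def test_apply_discount():
    for price, pct, expected in CASES:
        assert apply_discount(price, pct) == expected

if __name__ == "__main__":
    test_apply_discount()
    print("all cases pass")
```

Writing five cases is easy; writing five hundred, across every module, and keeping them current through refactors, is the investment that never penciled out when humans did it by hand.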

The same pattern repeats across your SDLC. Requirements docs exist because engineers couldn’t efficiently extract context from product conversations. Code review protocols exist because you never had comprehensive tests. Team structures exist because coordinating human developers was expensive.

This isn’t about AI solving problems. It’s about recognizing AI’s capability makes most of your problems obsolete.

The Story Points Question

Perfect example: I’ve heard the same question from executives and individual contributors alike: “Can AI tell me how many story points this is?”

Smart people. Experienced. Decades in the industry. Genuinely trying to understand.

And every time, I recognize the question immediately. It’s the same gap in understanding I’ve seen dozens of times before.

Story points exist because you couldn’t predict velocity when humans were the bottleneck. When AI can ship a feature in a day that used to take a sprint, story points aren’t a metric you need AI to calculate. They’re a metric you need to stop using.

These weren’t dumb questions. They revealed people thinking about AI as a tool that optimizes the existing system instead of a capability that makes the system obsolete.

You can’t blame someone for not understanding AI’s capability when nobody’s shown them. But you can help them close that gap fast.

What Debt Can You Finally Pay Down?

If you were building your SDLC from scratch today, knowing what AI can do, what organizational and technical debt would you not take on?

For most organizations, the answer is most of it.

You wouldn’t create a dev/QA handoff. You wouldn’t build requirements documentation processes. You wouldn’t need code review protocols for mechanical issues. You wouldn’t structure teams around coordination overhead.

All of that was debt. Necessary debt. Smart debt. But debt.

Your team knows where the debt is. They just don’t understand yet that AI’s capability lets them pay it down. Once they see it, transformation accelerates naturally.

What You’ll See If You Ask Around

Ask friends in leadership about their AI transformations. I’m keeping these broad to protect the people I talk with, but the patterns are consistent:

Companies restructuring from many teams to far fewer. Not layoffs. Because once they understood what AI could do, coordination overhead became debt they could eliminate. Engineers owning features end to end. Happier, building more, dealing with less handoff toil. Shipping faster with smaller teams.

Firms retraining QA engineers as developers who use AI to write better tests than manual QA ever could. Turns out people who understand edge cases make great engineers when you remove the coding bottleneck. Quality up. Cycle time down. Former QA engineers building features.

Organizations eliminating requirements documentation. Product and engineering collaborating directly with AI capturing context. The work that matters getting more attention. Translation debt disappearing.

Same pattern everywhere: once people understood what AI could actually do, they stopped asking how to optimize existing processes and started asking which processes to eliminate.

The organizations moving fastest didn’t figure this out over years. They got help bridging the gap in weeks.

The Leadership Opportunity

When your QA lead asks how to teach AI your testing process, nobody’s helped them understand what AI can actually do. When your architect asks about AI code review standards, they don’t yet see that AI’s capability fundamentally changes what code review is for.

These are leadership opportunities to accelerate understanding, not team limitations.

The question isn’t whether your people can learn. It’s whether you’re creating conditions for them to understand what AI is actually capable of, and whether you’re willing to bring in people who can help them see it faster than trial and error allows.

Here’s How To Bridge The Gap

You cannot understand AI’s capability in the SDLC by reading about it.

The only way people actually understand is to see it demonstrated live in their environment by people who get it. In your actual codebase, with your actual technical debt, solving your actual problems.

Without that, you’ll keep getting questions that reveal misunderstanding. “Can AI tell me story points?” “How do we teach AI our unique testing process?”

These are questions from smart people who haven’t experienced what AI can actually do yet.

You can’t see it from conference keynotes or blog posts or vendor pitches. You see it when someone sits with your team, opens your repo, and shows them what becomes possible. When they watch comprehensive tests get written in real time. When they see two-year-old technical debt paid down in an afternoon. When they experience shipping without coordination overhead they thought was just “how software development works.”

That’s when understanding clicks.

Reading about AI in the SDLC is like reading about learning to drive. Useful context. Completely insufficient for understanding what it actually feels like.

Your teams need to see it work in their environment. They need to experience the moment when they realize their “unique process” was just an expensive workaround. They need someone who understands AI’s capability to help them ask better questions in real time.

The difference between organizations that waste a year and organizations that accelerate past the gap in weeks? The fast ones brought in people who already understand AI-SDLC to show them what’s possible in their actual environment. The slow ones tried to figure it out through pilot programs and internal experimentation.

You don’t have to waste a year. But bridging the understanding gap requires more than reading articles and attending conferences.

Your Questions Tell You Everything

The questions your teams ask reveal whether they understand AI’s capability.

Are you asking how to preserve existing processes, or whether those processes are still necessary?

Are you asking how to make AI fit your org chart, or whether your org chart was always a workaround for constraints that no longer exist?

Are you asking how to maintain current productivity, or what becomes possible when you understand what AI can actually do?

The organizations transforming fastest aren’t smarter. They understand the capability better. And they got there faster because they didn’t try to figure it out alone.

Your teams are ready. They’ve been living with the workarounds. They know which processes exist because the alternative was too expensive. They just need someone to show them what becomes possible when those constraints disappear.

The transformation starts when you stop asking how to teach AI your old processes and start asking which processes exist only because you didn’t understand what AI could do.

Once you understand the capability, the answer becomes obvious. Most of your processes weren’t solving problems. They were working around constraints.

AI eliminated the constraints. Now you can fix the underlying issues instead of building more sophisticated workarounds.

But only if you understand what AI can actually do. And only if you’re willing to get help bridging that gap instead of spending a year learning through expensive trial and error.

The gap between where you are and where you need to be isn’t that wide. But it won’t close itself. And every month you spend asking the wrong questions is a month your competitors spend eliminating the processes you’re trying to optimize.

Bridge the gap fast. Get help from people who understand AI-SDLC. Don’t waste a year figuring out what could be learned in weeks.

It’s available right now.