After two decades in software development, I've pieced together a few personal conversations I've had with former peers, now senior leaders, about adopting AI in the SDLC. Here's a common theme that emerged in late 2025.
Clay called me on a Tuesday night, which meant something was wrong. We’d worked together for six years before he left to take a director role at a company you’d recognize. He doesn’t do social calls. When Clay picks up the phone, there’s a problem he can’t solve and he’s out of people to ask.
“I need to talk through something,” he said. “And I need you to tell me if I’m crazy.”
He’d been running an AI coding agent rollout for three months. Executive mandate, full organizational buy-in, generous training budget, the works. And his team was splitting in half in ways he couldn’t explain.
“I’ve got engineers who picked this up in a week,” he said. “They’re shipping features at twice their previous velocity. It’s everything the vendors promised. But I’ve got other engineers—people with fifteen years of experience, people I’d trust with anything—who can’t get anything useful out of the same tools. Same training. Same documentation. Same access. Completely different outcomes.”
I asked the obvious question first. Age?
“That’s the thing,” Clay said. “I thought it would be generational. It’s not. It’s not even close.”
He told me about Maya, a fresh graduate who’d been on the team for eight months. She was cooking. Shipping faster than engineers with a decade of experience. She’d taken to the agents like she’d been using them her whole career.
And then there was Carl. Thirty years at the company. Hired when they were still writing COBOL. Carl was one of Clay’s best agent users.
“Carl,” I said. “Thirty-year Carl.”
“Thirty-year Carl. The guy comments his code like he’s writing a letter to his future self. He’s been explaining systems to new hires since before I was born. He picked up the agent in a week.”
So who was struggling?
“That’s what’s killing me. It’s not the people I expected. I’ve got mid-career engineers, ten years in, who can’t get anything useful out of these tools. I’ve got senior staff who’ve been here fifteen years. They’re not dumb. They’re not resistant. They just… can’t make it work.”
I asked Clay what he saw when the struggling engineers tried to use the agent.
“They prompt it the way they’d think about the problem themselves. Shorthand. Assumptions. References to things that aren’t written down anywhere. Then they get frustrated when the output doesn’t match what they had in their head.”
I asked what Maya and Carl did differently.
Clay paused. “They explain things. Like, actually explain them. Maya writes prompts like she’s onboarding a new hire. Carl writes prompts like he’s documenting a system for the next thirty years.”
There it was.
Here’s what I told Clay: the engineers who are struggling don’t have a tools problem. They don’t have a prompting problem. They don’t have a generational problem.
They have an externalization problem. They’ve never learned to explain context—or they learned it once and stopped practicing. And now explaining context is the whole game.
The correlation isn’t age. It isn’t seniority. It isn’t intelligence or technical skill. The correlation is whether someone can take the knowledge in their head and put it into words that a collaborator with zero institutional context can actually use.
Maya can do that because she just came from an environment where she had to—school is nothing but explaining your thinking to people who are evaluating it. Carl can do that because he’s been onboarding new hires and writing documentation for three decades. Explaining context is a muscle Carl never stopped exercising.
The mid-career engineers who are struggling? Somewhere along the way they stopped. They got good enough at their jobs that they didn’t need to explain themselves anymore. They navigated complex systems through intuition built over years of pattern-matching. They absorbed context through osmosis—sitting near the right people, overhearing the right conversations, building mental models through thousands of small interactions.
That intuition lives in neurons, not words. They've never had to externalize it because nothing ever forced them to. The knowledge was just there when they needed it.
When they direct an agent, they get code that technically does what they asked but misses everything they assumed was obvious. The agent didn’t know that this particular service returns 200 OK even on failures. The agent didn’t know the team tried that approach two years ago and it caused an incident. The agent didn’t know the naming convention in this module is different because it was acquired from another company.
The engineer knows all of this. They just can’t say it.
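To make that kind of unwritten rule concrete, here is a minimal sketch of the first item on that list: a client for an internal service that reports failures inside a 200 response. The service name, URL, and response fields are invented for illustration; the point is that nothing about this behavior is discoverable from the status code alone, so it has to be written down, in a docstring, a prompt, or a context file, before an agent can act on it.

```python
# Hypothetical sketch: an internal payments service that returns 200 OK
# even when the charge fails. The names and fields below are invented.
import requests


def charge_card(amount_cents: int, card_token: str) -> bool:
    """Call the internal payments service and report whether the charge succeeded.

    Tribal knowledge made explicit: the service always responds 200 OK,
    and the real outcome lives in the JSON body under "status".
    """
    resp = requests.post(
        "https://payments.internal.example/v1/charge",
        json={"amount_cents": amount_cents, "card_token": card_token},
        timeout=5,
    )
    resp.raise_for_status()  # raises on 4xx/5xx, but a failed charge still arrives as 200
    body = resp.json()
    # 200 OK does not mean success here; the body carries the actual result.
    return body.get("status") == "succeeded"
```

The engineer who can externalize writes those two comment lines somewhere the agent can see them. The engineer who can't simply knows it, and the agent ships code that trusts the status code.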
Clay called again two weeks later.
“I’ve been watching more closely,” he said. “You were right. It’s the explaining thing. It’s always the explaining thing.”
He’d started paying attention to how his engineers communicated in contexts that had nothing to do with AI. Pull request reviews. Architecture discussions. Onboarding conversations with new team members.
“The engineers who are good with agents? They’re the same ones who write pull request descriptions that actually help reviewers understand what they’re looking at. They’re the ones who can walk a new hire through a system without assuming knowledge the new hire doesn’t have.”
And the ones who struggle?
“Their pull request descriptions say ‘fixed the bug.’ Their architecture explanations skip three steps because the steps are obvious to them. They’re not bad engineers. They’re just bad at externalizing what they know.”
I gave Clay a mental model that helped me: think about what an AI coding agent actually is from an operational standpoint. Strip away the marketing language. What you have is a collaborator who is technically capable, has solid fundamentals, learns quickly—and is starting from absolute zero on your specific codebase, your architectural decisions, your organizational context.
This is functionally identical to a CS intern who just finished their third year. Book-smart. Capable of producing working code. Completely dependent on you to provide the context they’re missing.
“Maya treats the agent like an intern,” Clay said. “Carl treats it like a junior engineer he’s mentoring. The struggling folks treat it like… I don’t know. Like it should already know what they know.”
That’s exactly it. Maya and Carl are good at explaining context to humans, so they’re good at explaining context to agents. The struggling engineers never built that muscle—or let it atrophy—so they’re stuck.
The third call came a month later. Clay sounded different. Quieter. Like he’d seen something he wasn’t sure he should say out loud.
“I need to tell you something,” he said. “And I don’t know what to do with it.”
He’d been doing one-on-ones with the engineers who were struggling most. Trying to understand where the gaps were. Trying to figure out what kind of training would actually help.
“Some of them can’t externalize context because they never learned how. That’s the first problem, the one we talked about. But some of them…” He trailed off.
Some of them what?
“Some of them can’t externalize context because they don’t actually have it. They don’t understand the systems they’ve been working on. Not really. Not at a level where they could explain it to someone else.”
I let that sit for a moment.
“I’ve got an engineer,” Clay continued. “Fifteen years of experience. Respected. Senior title. And when I asked him to walk me through how one of our core services actually works—not to explain it to an agent, just to explain it to me—he couldn’t do it. He’s been working on that service for three years.”
How is that possible?
“Because for fifteen years, he’s been stitching things together. Copy this pattern from over here. Modify that snippet from Stack Overflow. Follow the template from the last project. The code works. It ships. Nobody asks questions because everything moves slowly enough that you can’t tell the difference between someone who understands the system and someone who’s learned to navigate it without understanding it.”
Clay had stumbled onto something that nobody talks about because it’s too uncomfortable: traditional software development moved slowly enough to hide a lot of sins. When you’re shipping quarterly, when code reviews are cursory, when the same patterns get copy-pasted across projects for years, you can build an entire career on pattern-matching without ever developing a deep mental model of what you’re building.
These engineers aren’t stupid. They’re survivors. They figured out that you didn’t need to understand the system to work on the system. You just needed to know which incantations to copy, which templates to follow, which senior engineer to ask when you got stuck. The system rewarded shipping code, not understanding code.
Now they’re sitting in front of an agent, and the agent is asking them to explain what they want to build. And they’re realizing—some of them for the first time—that they can’t explain it because they don’t actually know.
“The agent is an X-ray machine,” Clay said. “It’s showing me things about my team that I couldn’t see before. I’m not sure I wanted to see them.”
This is the part of the conversation that gets uncomfortable. Because we’re not just talking about a training gap anymore. We’re talking about engineers who may have spent years—decades—building careers on a foundation that was never as solid as it looked.
But here’s what I told Clay: even this is fixable. It’s just a different kind of fix.
The externalization problem has two variants. The first is engineers who have the knowledge but can’t articulate it. The second is engineers who never built the knowledge in the first place because the old system never required it.
For the first group, you’re teaching communication skills. For the second group, you’re teaching the fundamentals they skipped. Both are learnable. Both respond to practice. Neither is a character flaw.
The engineers who’ve been stitching without understanding? They’re not lazy. They’re rational actors who optimized for the system they were in. The system rewarded shipping. It didn’t reward understanding. They did exactly what they were incentivized to do.
The system has changed. The incentives have changed. Now they need to adapt. And the good news is that understanding systems—actually understanding them, at a level where you can explain them to someone starting from zero—is a skill that can be developed at any point in a career.
It just takes longer than a prompt engineering workshop.
Clay called one more time, a few weeks later.
“I figured out what Carl and Maya have in common,” he said. “It’s not that they’re both good at explaining things. It’s that they both actually understand what they’re building. Maya because she just learned it in school and it’s fresh. Carl because he’s been building mental models for thirty years and never stopped.”
He’d started thinking about his remediation program differently. It wasn’t just about teaching people to externalize. It was about giving some of them permission—and support—to go back and learn the things they’d skipped.
“I’ve got a senior engineer who asked if he could spend two weeks just reading the codebase. Not writing code. Just reading. Understanding. Building a mental model he never had.”
What did you tell him?
“I told him yes. And I told him it was brave to ask.”
So what do you actually do about this? The answer still sounds embarrassingly old-fashioned: book clubs, pair programming, coding katas, architecture reviews.
But now we understand why these practices work. They’re not just building the externalization muscle. They’re building—or rebuilding—the understanding that externalization requires.
Book clubs force engineers to articulate technical concepts to peers. But they also force engineers to actually learn the concepts well enough to articulate them. You can’t hide behind pattern-matching when you’re explaining chapter four of Designing Data-Intensive Applications to colleagues who will ask follow-up questions.
Pair programming makes thinking visible. It also makes not thinking visible. When you’re narrating your approach to a partner, you can’t just copy a pattern and hope it works. You have to know why.
Coding katas with an emphasis on explaining your approach before writing code create low-stakes practice. You fail in a safe environment. You learn that the gap between doing something and understanding something is wider than you thought.
Architecture reviews where junior engineers present and senior engineers ask questions flip the usual dynamic. They also expose who actually understands the systems and who’s been navigating by landmarks. Not to punish anyone—but to know where the investment needs to go.
“These aren’t soft skills,” I told Clay. “These are the hard skills now.”
Clay pushed back. “Some of these engineers have been faking it for a long time. What am I supposed to do—fire them?”
I told him that was the wrong frame. These engineers have been doing exactly what the system asked them to do for years. They shipped code. They met deadlines. They got promoted. The fact that they did it without deep understanding is a failure of the system, not a failure of the individual.
The question isn’t whether to punish them. The question is whether to invest in them.
And the answer should almost always be yes—because the skills they need to develop are the same skills everyone needs now. The engineers who can’t externalize need to learn to externalize. The engineers who don’t understand need to learn to understand. Both of these respond to practice. Both of these are valuable regardless of what happens with AI tools.
Give the agents to everyone. Not because struggling engineers will magically improve through exposure, but because the struggling reveals exactly where the development work needs to happen.
Meet people where they are. Some of them have gaps they’ve been hiding for years. That’s not a moral failing. That’s a debt that’s come due.
And move with urgency. Your competitors are figuring this out too.
Clay texted me last week. He’d started the book club, the pair programming rotation, the architecture reviews. But he’d added something else: dedicated time for engineers to just learn their own systems. No tickets. No deadlines. Just reading code, building mental models, developing the understanding they’d never been required to have.
“It feels weird,” he said. “Like I’m admitting that people didn’t know what they were doing.”
I told him he was admitting something different: that the old system never checked. And now he was building a system that does.
He sent a follow-up an hour later.
“Carl volunteered to lead the pair programming sessions. Said it reminds him of how he learned thirty years ago. Maya asked if she could help with the architecture reviews. Said she wants to learn what senior engineers look for.”
The engineer who asked for two weeks to read the codebase? He’s become one of Clay’s best agent users. Turns out once he actually understood the system, explaining it to an agent was the easy part.
The knowledge was always available. The understanding was always possible. We’re just finally building systems that require both.
Engineering leader who still writes code every day. I work with executives across healthcare, finance, retail, and tech to navigate the shift to AI-native software development. After two decades building and leading engineering teams, I focus on the human side of AI transformation: how leaders adapt, how teams evolve, and how companies avoid the common pitfalls of AI adoption. All opinions expressed here are my own.