Three years ago, reasonable leaders like you debated whether AI would meaningfully change software development. That debate is over. The question now isn’t whether AI matters—it’s whether you’ve already waited too long to lead.
Here’s what’s happening in boardrooms and executive offsites right now: a pervasive anxiety about being behind, combined with a tendency to benchmark against today’s capabilities. Both instincts are understandable. Both will hurt your organization.
The Lag You’re Building Into Your Company
Think about what happens when you greenlight an AI initiative today. Your teams spend months understanding the current tools. They design solutions, build them out, roll them out across the organization. By the time you reach real maturity, it’s 2027 or 2028—and the capabilities you built for are two generations behind.
You’ve seen this pattern before. With cloud adoption. With DevOps. With every major platform shift. The executives who waited for clarity got clarity—along with a competitive gap they never fully closed.
The difference now is the rate of change. The jump in AI capability from 2022 to 2025 wasn’t linear, and the jump from 2025 to 2028 won’t be either. If your teams are worried about being behind today, imagine the board conversation in 2028 when you’re explaining why your company is just now reaching 2025 capability while competitors have moved on.
Aim Where the Puck Is Going
The leaders getting this right share a common trait: they’re building for capabilities that don’t fully exist yet. Not because they’re reckless, but because they understand how organizational change actually works.
It takes time to rewire how your teams think about specifications. It takes time to build the muscle memory for human-AI collaboration. It takes time for your engineers to develop intuition about what AI does well versus where human judgment matters. It takes time to pay down the technical debt that makes agents stumble—the tribal knowledge that never made it into documentation, the inconsistent patterns across services, the build systems held together with hope and shell scripts.
It takes time to reorganize. To build out new capabilities. To hire differently, train differently, measure differently.
None of that can be rushed. But all of it can be started.
Your peers who are getting this right are placing bets. Some have their R&D teams pushing toward lights-out development—fully automated pipelines where humans set direction and review outcomes. Others are mapping their entire value stream to identify where AI removes constraints versus where it creates new ones. Some are running parallel experiments across business units, deliberately testing different approaches.
The specific bet matters less than the intent behind it. Don’t wait for the industry to converge on best practices. Run structured experiments, measure outcomes, and build organizational knowledge that compounds over time.
And have these conversations everywhere. With your peers on the executive team, pressure-testing assumptions and aligning on direction. With your direct reports, creating explicit permission to experiment and fail forward. With the board, reframing AI from a cost-optimization play to a capability-building investment with a multi-year horizon.
The Real Risk Calculation
You’ve probably said some version of “we can’t afford to make mistakes with AI” in recent conversations. It sounds like fiduciary responsibility. It’s actually the most dangerous position you can take.
But here’s a harder question: is your capacity so overcommitted that you genuinely can’t afford to experiment? If that’s true, it’s not a reason to avoid AI transformation. It’s a reason to accelerate it. Because at that rate, you’re headed for failure anyway.
The math is brutal. Most engineering organizations spend less than 20% of their capacity actually adding value to the product. The rest disappears into maintenance, toil, and the tax you pay on accumulated complexity. That’s not a sustainable position—it’s a slow bleed. AI-native development is one of the few levers that can fundamentally change that ratio.
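To make that capacity math concrete, here is a back-of-the-envelope sketch. The 20% value ratio comes from the claim above; the 30% toil reduction is a purely illustrative assumption, not a measured benchmark:

```python
# Back-of-the-envelope capacity math. All numbers are illustrative;
# the 30% toil reduction is a hypothetical assumption, not a benchmark.
total_capacity = 100                        # engineer-weeks per quarter
value_ratio = 0.20                          # share spent adding product value
toil = total_capacity * (1 - value_ratio)   # 80 engineer-weeks lost to toil

# Suppose AI-native practices eliminate 30% of that toil
# and the recovered time goes back into the product.
recovered = toil * 0.30                     # 24 engineer-weeks regained
new_ratio = (total_capacity * value_ratio + recovered) / total_capacity
print(new_ratio)                            # → 0.44: value share more than doubles
```

Note where the leverage is: even a modest dent in the 80% lost to toil moves the ratio more than any plausible gain on the 20% that was already productive.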
The mistakes you make moving toward AI are recoverable. Try an approach, it doesn’t deliver expected value, learn something, adjust. That’s how organizational capability gets built. The mistake of not moving toward AI isn’t recoverable in the same way. You can’t compress three years of organizational learning into six months when you finally decide to move.
The companies that struggled with cloud adoption weren’t the ones who made early mistakes. They were the ones who waited for the “right time” and found themselves permanently behind competitors who had been learning while they were planning.
Your job isn’t to avoid mistakes. It’s to create the conditions where your organization can make small mistakes fast, learn from them, and build capability that competitors can’t easily replicate.
Your Payout
Here’s the upside nobody talks about: if you successfully implement an AI-native software development lifecycle now, think about what the market looks like for you in 2028.
There will be companies—many of them—that haven’t made this transition. They’ll be looking for leaders who’ve done it. Who’ve made the mistakes, built the playbooks, developed the intuition for what works. Three years of AI transformation experience in 2028 makes you extraordinarily valuable to organizations just starting their journey.
This isn’t just about competitive advantage for your current company. It’s about your own trajectory. The skills you build leading this transformation—the pattern recognition, the organizational change muscle, the technical fluency—compound in your career the same way they compound in your organization.
The executives who led cloud transformations a decade ago wrote their own tickets. The same will be true for AI. The question is whether you’ll be the one with the experience or the one hiring for it.
This is your payout. This is the way.
What Building for 2028 Actually Means
This isn’t about predicting specific AI capabilities. It’s about building an organization that can absorb rapid change in how software gets built, how products get delivered, how value gets created.
Treat AI fluency as a core competency across your organization, not an IT initiative. Restructure how your teams think about specifications—because the gap between what humans can articulate and what AI can execute is becoming the primary constraint on velocity. Accept that some of your current process exists to manage human limitations that AI doesn’t share, and be willing to let go of those processes.
Most importantly, lead. Make decisions with incomplete information. Allocate resources to bets that might not pay off. Create explicit permission for your teams to experiment, fail, and learn. Measure what matters—outcomes, learning velocity, capability growth—rather than what’s easy.
The organizations that will be ahead in 2028 aren’t the ones with the best AI strategy decks. They’re the ones whose executives started moving in 2025, made mistakes, learned faster than competitors, and built organizational muscle that can’t be copied.
The worst mistake you can make isn’t the wrong bet on a tool or approach. It’s the decision to wait until the path is clear—because by then, the leaders who moved earlier will have already walked it.
Related Reading
- Your AI Investment Is Failing. Here’s Why.
- What Got You Here Won’t Keep You Here: A Letter to Technology Executives
- The Bottlenecked CEO
Engineering leader who still writes code every day. I work with executives across healthcare, finance, retail, and tech to navigate the shift to AI-native software development. After two decades building and leading engineering teams, I focus on the human side of AI transformation: how leaders adapt, how teams evolve, and how companies avoid the common pitfalls of AI adoption. All opinions expressed here are my own.