If you are a CTO, VP of Engineering, or senior director reading this, I want you to answer one question honestly. When was the last time you built something? Not approved something. Not reviewed a vendor demo. Not signed off on an architecture diagram someone else drew. Built something. Wrote code that shipped. Designed a system that went to production. Sat with your team and solved a problem by building the solution together.
If the answer is measured in years, this article is about you.
How We Got Here
There was a moment, somewhere in the last decade, when a generation of technology leaders quietly stopped being technologists. The transition was gradual enough that most of them did not notice it happening. One year you are an engineering director who still reviews pull requests and occasionally pairs with your senior developers on hard problems. The next year you are in a role where your calendar is wall-to-wall vendor meetings, steering committees, and quarterly business reviews. The year after that, you cannot remember the last time you opened an IDE.
I am not saying this to shame anyone. The incentive structures in most large organizations actively reward this transition. You get promoted for managing budgets, not for understanding code. You get recognized for vendor negotiations, not for technical judgment. Your performance review measures headcount growth and project delivery dates, and neither of those requires you to understand how the software actually works.
So you optimized for what the system rewarded. You became a professional meeting attendee who happens to have an engineering background. And for a while, that was fine. The technology stack was stable enough. The vendors were reliable enough. The team was senior enough to run without deep technical leadership from above. You could manage by spreadsheet and it mostly worked.
Then AI showed up.
The Vacuum
When the most consequential shift in how software gets made arrived at your organization’s doorstep, you needed technical leadership that could evaluate these tools independently, that could separate the signal from the vendor noise, that could look at a demo and know whether the capability was real or whether it was a carefully orchestrated ten-minute performance designed to close a deal.
You needed leaders who could say “this changes our architecture and here is how” or “this does not actually solve the problem our team has, and here is why.” Leaders with enough current technical depth to understand what AI agents can and cannot do in a production codebase, not in a conference demo with a clean repository and no legacy constraints.
You did not have those leaders. Not because they were incapable. Because the organization had spent a decade selecting against that capability. The leaders who stayed technical got passed over for the ones who were good at vendor management and executive communication. The ones who kept building got told they needed to be “more strategic.” The ones who insisted on maintaining hands-on skills got labeled as unable to delegate.
The people who remained in senior technical leadership roles had, in many cases, not shipped code in five or six years. They could talk about architecture at a whiteboard, but they could not tell you from experience whether the AI coding assistant actually produced reliable output in a complex codebase, because they had not worked in a complex codebase recently enough to have an informed opinion.
That created a vacuum. And vacuums get filled.
Who Filled It
Your vendors filled it. And I want to be clear about something: the vendors are not villains in this story. They are companies with smart people who built products that genuinely work. When they showed up at your door with a solution and your leadership team did not have the technical depth to independently evaluate it, the vendors did what any rational actor would do. They helped.
They helped you build your AI strategy. They helped you define your roadmap. They helped you select which teams would pilot first and what success metrics to use. They offered to train your developers (on their platform, naturally). They connected you with their customer success team, who would check in monthly to make sure you were “getting value” (which in practice meant making sure you were using enough of the product to justify the renewal).
I sat in a steering committee meeting last year at a financial services company (a large one, the kind with a campus and a cafeteria that serves sushi on Thursdays). The agenda was “AI Strategy Review.” Eight people in the room. The CTO, three VPs, two directors, the vendor’s account executive, and the vendor’s field CTO. The vendor’s field CTO presented for forty-five minutes. He had slides with the company’s logo on them, usage metrics from their deployment, and a roadmap for the next two quarters of expanded adoption.
I watched the room. The CTO was nodding. The VPs were nodding. The directors were taking notes. Nobody asked a question that was not about timeline or budget. Nobody asked whether the architecture the vendor was proposing was actually the right architecture for their specific constraints. Nobody asked whether the success metrics the vendor was presenting measured business outcomes or just tool adoption. Nobody asked whether there were alternatives, because nobody in the room had the technical depth to know what the alternatives were.
After the meeting, one of the directors, a woman named Karen who had been an excellent engineer before she got promoted into management seven years ago, pulled me aside. “I have a question I did not want to ask in there,” she said. “Is this actually working? Because my teams are using the tool and I cannot tell if we are getting better or just getting busier.”
Karen’s question was the right question. Nobody else in the room could have asked it, because nobody else in the room had enough residual technical intuition to sense that something was off.
What Vendor-Led Strategy Actually Looks Like
I have seen this pattern at enough organizations now to describe it with some precision. Here is what happens when your vendor owns your AI strategy.
Your training curriculum is built by the vendor. Your developers learn the vendor’s tool, the vendor’s workflow, the vendor’s mental model for how AI fits into development. They do not learn principles that transfer across tools. They do not develop the judgment to evaluate whether a different approach might work better for their specific codebase, their specific team, their specific constraints. They learn to use the product. That is not the same thing as learning to build with AI.
Your success metrics are defined by the vendor. Adoption rate. Lines of code accepted. Prompts per developer per day. These metrics measure tool usage. They do not measure whether your time to market improved, whether your defect rate changed, whether your customers are seeing better outcomes. The vendor reports these metrics to your leadership in a beautifully formatted quarterly deck, and your leadership presents them to the board as evidence that the AI investment is working. (Editor’s note: I have reviewed at least a dozen of these decks in the last year. The correlation between the metrics they report and actual business outcomes is, to put it charitably, unclear.)
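To make that distinction concrete, here is a minimal sketch contrasting the adoption metrics a vendor deck typically reports with outcome metrics you would have to compute yourself, from your own issue tracker and deployment logs. Every field name and number here is a hypothetical placeholder, not data from any real deployment.

```python
# Hypothetical vendor-reported adoption metrics. These measure tool
# usage, not business outcomes.
vendor_deck = {
    "adoption_rate": 0.87,          # share of developers using the tool weekly
    "suggestions_accepted": 41200,  # lines of AI-generated code accepted
}

# Outcome metrics you would have to compute from your own delivery
# data, quarter-over-quarter. Illustrative numbers only.
delivery_data = {
    "lead_time_days":  {"before": 14.2, "after": 13.9},
    "escaped_defects": {"before": 31,   "after": 36},
}

def outcome_delta(metric: str) -> float:
    """Relative change after adoption. For both lead time and defect
    counts, negative means improvement."""
    m = delivery_data[metric]
    return (m["after"] - m["before"]) / m["before"]

for metric in delivery_data:
    print(f"{metric}: {outcome_delta(metric):+.1%}")
```

In this illustrative dataset, lead time barely moved while escaped defects rose, which is exactly the kind of divergence an adoption-rate slide will never surface. The point of the sketch is only that the two sets of numbers come from different sources and can tell opposite stories.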
Your roadmap follows the vendor’s product roadmap. When the vendor releases a new feature, your organization adopts it. When the vendor deprecates a capability, your organization adjusts. You are not building a strategy around your business needs. You are building a strategy around what the vendor makes available. Your product roadmap and the vendor’s product roadmap have become the same document, and nobody noticed because nobody in leadership had the technical depth to see the difference.
Your architecture reflects the vendor’s opinions. The vendor has a preferred way to integrate their tool into your CI/CD pipeline (continuous integration and continuous deployment, the automated machinery that moves code from development to production). That preferred integration path is the one your team implemented, because the vendor’s field engineer set it up, and your team did not have a strong enough opinion to push back. Whether that integration path is optimal for your specific codebase, your specific regulatory constraints, your specific team topology is a question nobody asked.
The Uncomfortable Question
Could it be that your leadership team has become a procurement function with engineering titles? Could it be that the last real technical decision your CTO made was selecting the vendor, and everything since then has been execution of the vendor’s playbook? Or is it possible that your leaders recognize this dynamic and are uncomfortable with it but do not know how to rebuild the capability they lost, because the skills that got them into their current roles are not the skills required to lead through this transition?
I ask these questions without malice. I have worked with leaders who fit every one of these descriptions, and most of them are thoughtful people who want to do the right thing. The problem is structural, not personal. The system selected for vendor managers. It got vendor managers. And now it needs something different.
What Rebuilding Looks Like
The fix is not changing vendors. You will end up in the same position with a different logo on the quarterly deck. The fix is rebuilding technical leadership capability inside your organization.
This starts with an honest assessment that most leadership teams are not willing to do. Look at your top fifteen technology leaders (the CTO, the VPs, the senior directors, the principal engineers if you still have them). How many of them have shipped code in the last twelve months? How many of them could sit down with a team tomorrow and meaningfully contribute to a technical design session, not as a reviewer but as a participant? How many of them could independently evaluate a new AI tool without relying on the vendor’s demo and the vendor’s benchmarks?
If the answer to any of those questions is “fewer than half,” you have a leadership capability gap. And that gap is the reason your vendor owns your strategy.
Closing the gap requires several things, and none of them are comfortable. It requires carving out time for your senior leaders to build again. Not as a hobby. Not as a “hack day” twice a year. Regularly, with real problems, producing real outcomes. I know a VP at a manufacturing company, a guy named Rich, who blocked four hours every Friday to pair with engineers on his teams. His peers thought it was a strange use of a VP’s time. His teams had the highest retention in the division and the best AI adoption metrics, because when Rich said “this tool works” or “this approach is wrong,” his teams trusted him. He had earned the right to that opinion by doing the work.
It requires changing what you select for in leadership hiring. If your VP of Engineering job description does not include a technical depth requirement, if the interview process does not include a technical evaluation, you are explicitly saying that technical capability is not required for the role. You will get what you select for.
It requires building internal evaluation capability. When a vendor presents a new tool, your organization should be able to run its own independent evaluation with its own criteria, its own test cases, and its own engineers, before the vendor’s field CTO gets access to your environment. Not after. Before. This means you need people internally who are current enough to design that evaluation, run it, and interpret the results.
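What an internal evaluation might look like, sketched at the smallest possible scale: a task set drawn from your own codebase, acceptance checks you define, and a tool treated as a black box. Everything here is hypothetical; the task names, the acceptance checks, and the `stand_in_tool` callable are placeholders you would replace with your own test cases and the actual tool under evaluation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalTask:
    name: str
    prompt: str                    # drawn from your real codebase, not a clean demo repo
    passes: Callable[[str], bool]  # your acceptance check: tests, lint, a review rubric

def evaluate(tool: Callable[[str], str],
             tasks: list[EvalTask]) -> tuple[dict, float]:
    """Run every task through the tool and score it against your own
    criteria, independent of whatever benchmarks the vendor supplies."""
    outcomes = {t.name: t.passes(tool(t.prompt)) for t in tasks}
    return outcomes, sum(outcomes.values()) / len(tasks)

# Illustrative run with a stand-in tool that always returns the same patch.
tasks = [
    EvalTask("null-check fix", "Fix the NPE in OrderService.apply",
             lambda out: "is not None" in out),
    EvalTask("legacy refactor", "Extract billing logic from Invoice",
             lambda out: bool(out.strip())),
]
stand_in_tool = lambda prompt: "if order is not None: order.apply()"

outcomes, pass_rate = evaluate(stand_in_tool, tasks)
print(outcomes, f"pass rate: {pass_rate:.0%}")
```

The design choice that matters is that the tasks and the `passes` checks are owned by your engineers, versioned in your repo, and reused across every tool you evaluate, so two vendors can be compared on the same footing before either field CTO touches your environment.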
And it requires being willing to disagree with the vendor. This is the hardest part. Vendors are persuasive. They have data (their data, selected to support their narrative). They have case studies (from companies that may or may not resemble yours). They have smart people who have thought deeply about the problem (from the perspective of selling you their solution). Disagreeing with all of that requires confidence, and confidence requires competence. You cannot disagree with the vendor’s architecture recommendation if you do not have the technical depth to propose an alternative.
The Vendor Is Not the Problem
I want to say this again because it matters. The vendor is not the problem. Your vendor is doing exactly what a good vendor does. They built a product. They hired smart people to support it. They are helping you succeed with it. The fact that their definition of “success” is aligned with your continued use of their product is not a conspiracy. It is a business model.
The problem is that your organization does not have a counterweight. In a healthy vendor relationship, the customer has enough internal capability to say “we evaluated your recommendation and we are going in a different direction on this piece.” The customer has enough technical depth to ask the questions that Karen asked me in the hallway, and the standing to ask them in the meeting, on the record, with enough authority to change the outcome.
When your leadership team cannot do that, the vendor relationship is not a partnership. It is a dependency. And dependencies in your organizational structure are just as dangerous as dependencies in your codebase.
What Happens If You Do Not Fix This
The AI landscape is moving faster than any prior technology shift. The tools that are dominant today may not be dominant in eighteen months. The best practices from this quarter will be outdated by next quarter. The vendor that is right for you now may not be right for you when your needs change, and your needs will change because the technology is changing underneath everyone.
If your vendor owns your strategy and the vendor’s product stops being the best fit, who in your organization has the capability to recognize that? Who has the technical depth to evaluate the alternatives? Who has the credibility with the engineering team to lead a transition?
If the answer is “nobody, because we let that capability atrophy,” then you are locked in. Not by a contract. Contracts end. You are locked in by a capability gap in your own leadership team, and that is a much harder lock to break.
So here is my question. If you looked at your leadership bench today and asked how many of them could lead your AI strategy without leaning on a vendor to tell them what to do, what number would you arrive at? And if that number is not large enough to give you confidence, what are you doing about it this quarter? Because the vendors are not going to fix this for you. This is not their problem to solve. It is yours. And the longer you wait, the harder it gets, because every quarter that passes without rebuilding this capability is another quarter where the vendor’s playbook becomes more deeply embedded in how your organization operates.
What happens to your company if the technology shifts again and the people responsible for navigating that shift do not have the technical judgment to do it? What happens if the next wave of AI capability requires a fundamentally different architecture, and nobody on your leadership team has built anything recently enough to know what that architecture should look like?
Hope is not a strategy. Your vendor’s roadmap is not your roadmap. And the technical leadership capability you need for what comes next is not something you can buy from a vendor. You have to build it.
