After I have the bottleneck conversation with a CxO, I have the training conversation. Every time, in that order. The bottleneck conversation is about the silos, the measurement gaps, and the organizational geometry that hides value from the people paying for it (that one is The Bottlenecked CEO, and it pairs with this one). The training conversation is what this article is about, and it is the one most executives are quietly losing.
Every training conversation I have this quarter opens the same way. Tooling rolled out, licenses purchased, pilots underway, training completed, and a non-trivial percentage of the engineering budget now sitting on an AI line item that did not exist two years ago. And the organization is producing almost exactly what it was producing two years ago. Roadmap where it was. Throughput where it was. The strongest people on the team keep asking, quietly, when the company is actually going to use the tools it paid for.
Then the executive says the line I have heard in every one of these rooms. “We went through the training, we bought the licenses, we ran the pilots, and we are not seeing the value.”
Here is the diagnosis, and there is only one solution. The solution is a new standard for what it means to work at your company, and you, the executive, are the only person who can set it. For the sake of this article I am going to call that standard, applied to the engineering role, the AI Software Engineer standard. You will want your own name for it when you roll it out (I have seen a half-dozen reasonable variations), but having a name for the thing lets us talk about it without confusion, so AI Software Engineer is what I will use here. Your organization is self-measuring its own adoption right now because there is no bar anyone is required to qualify into, and self-measuring and missing standards are the same failure at two altitudes. That is why the engagement survey comes back green while the output stays flat, and why next quarter you are back in this conference room with a bigger tool bill.
From here the rest of the conversation turns to competencies and capabilities. What does the role actually require now, in 2026, on this operating model? What does the organization need to be able to do, at the team level and at the seam between teams? And how do you set a standard that makes both of those answers real to the people in the building, so that competency stops being a talking point on a slide and becomes the pass-or-not-yet call a qualified reviewer makes on a real piece of work? That is the terrain this article covers, and it is the terrain your transformation is quietly avoiding.
Standards and Competencies Are Not Certifications
Before anyone assumes where this is going, I want to say the thing clearly. I am not advocating for certifications.
I am not asking you to stand up an internal certification program. I am not asking you to pay a third party to run a two-day workshop and hand out a badge. I am not asking you to add a line on your engineers’ LinkedIn profiles that says “Certified AI-Native Developer” with your company logo on it. The industry has run that play before, repeatedly, and it has not produced the thing it said it would produce. We spent a decade watching Certified Scrum Masters run stand-ups that looked exactly like the status meetings they were supposed to replace. We watched Certified Agile Coaches facilitate the same release-train rituals the organization already had, with new words taped over the old ones. The certification did not transfer the competency. It transferred a receipt.
A standard is different from a certification in three ways that matter.
A standard is about observable work. It is not about whether you sat through the training, it is about whether your output clears the bar in the real codebase, on the real product, under the real governance posture, reviewed by someone who has already shipped at that level. If your qualification path can be completed by someone who has never shipped anything in your company’s environment, you are building a certification and you are going to get certification outcomes.
A standard is held by the people already doing the work. There is no external certifier coming in to grant it. There is no curriculum author in another time zone who defines what “qualified” means. The qualified engineers in your organization calibrate each other and they calibrate the next cohort, and the bar is re-anchored every time the work itself changes. Certifications freeze a snapshot of a standard and date-stamp it. Standards keep moving because the work keeps moving.
A standard has consequences on both sides. If you qualify, your role changes in a way that the organization sees. If you do not, your role does not change in that way. Certifications can be collected without consequence, which is most of the reason people collect them. Standards cannot, because the organization that holds the standard organizes itself around who has qualified and who has not.
Competency, the way I am using the word in this article, is the observable demonstration of the standard. Not a credential. Not a course completion record. Not a quiz score. A piece of work, shipped, reviewed, calibrated against the bar by people who have already cleared it. Everything that follows in this article is about how to design, deploy, and defend that kind of bar. If you come away from this piece shopping for a certification vendor to roll out across the engineering org, you have read the exact opposite article from the one I am writing.
The Training Was Never the Bottleneck
You did not have a training problem. You had a standards problem, and you tried to solve it by buying a curriculum.
An eight-hundred-person engineering organization does not change because you rolled out AI SDLC tooling and pushed a learning path through your LMS. It changes because leadership defined what “engineer at this company” means now (the tooling, the throughput, the quality bar, the governance posture, all of it), and then required everyone in the role to demonstrate they meet the definition. The training is the onramp. The standard is the road. One without the other takes you nowhere.
When you skipped the standard and went straight to the training, you told the organization it was optional. Optional at scale is a synonym for ignored, and your organization correctly ignored it.
Self-Measuring Is Not Adoption
Here is what self-measuring looks like in the wild. You ask your VPs how adoption is going. Your VPs ask their directors. The directors ask the managers. The managers ask the engineers. The engineers say “yeah, I use it.” That sentence travels back up the chain, lands on a slide, and becomes “78% adoption” on your next board deck.
Nobody lied. Nobody measured anything either.
This is what organizations do when they want the appearance of transformation without the disruption of transformation. The DORA metrics did not move. Lead time looks the same as it did in 2022. Team composition is identical. The survey said 78%, so the program is green, so the CFO signs the renewal, so next quarter we sit in the same conference room and have the same conversation.
Self-measuring is not a data problem. It is a governance vacuum, and the vacuum exists because you, the executive, have not yet said the words that close it.
The Words You Have Not Said
Here are the words. I will write them out so you can paste them into a memo if that helps.
“These are the new standards for working at this company. If you want to continue in your role, here is what you need to demonstrate, here is the qualification path, here is the timeline, and here is the support. This is not a pilot. We are burning the boats.”
Every phrase in that paragraph is load-bearing. Let me walk through the ones executives flinch at.
“New standards.” Not guidelines, not “ways of working,” not a set of principles on a wiki page. Measurable, observable, passable or failable. A thing that says “yes, you are operating at the bar” or “not yet.”
“Qualify into.” A ritual boundary. You were on one side yesterday, you are on the other side today, and something happened in between that the whole team can point to. Without the ritual, there is no change. Your HR partner will tell you this is heavy-handed. They are wrong, and the heaviness is the entire point of the exercise.
“Continue in your role.” Consequence. This is the word executives want to delete, and it is the word that cannot be deleted. Take it out and the standard is a suggestion. Suggestions are precisely how you got here.
“Burning the boats.” This is the signal to the rest of the organization. A one-way door with everyone watching it close, not a phased rollout that can be quietly walked back in Q3 when someone gets nervous.
Why Coaching Does Not Scale to This
I was talking to my friend Lance about this last week. Lance and I spent a decade in large-scale tech coaching groups, the kind that parachute into Fortune 100 engineering organizations and try to install test-driven development across tens of thousands of developers. We genuinely believed, for years, that if we could just get TDD into enough hands the industry would bend. We were wrong. Not about TDD (which is still good practice), but about the mechanism. You cannot coach your way to a new standard at enterprise scale. You can only govern your way there. The coaches I worked with were excellent. The organizations that hired them kept their old standards intact, so the coaching was decorative.
Now with AI, the same lesson is back, louder. A five-person team with a real agentic workflow can ship what a seventy-person team shipped in 2023. That is not a forecast. That is the throughput curve the organizations that moved are living in right now. And the organizations that moved did not move because they hired better coaches. They moved because leadership set a new bar, required the people in the role to qualify into it, and was willing to have hard conversations with the ones who would not.
Coaching still matters. It is the onramp. The onramp without the standard is a parking lot.
Where You Are Stuck Right Now
Most of the executives I talk to are stuck in one of three places. Sometimes two.
Stuck in the transformation pilot. You have a lighthouse team, maybe two, producing beautiful demos that do not generalize. Why would they generalize? The standard for everyone else’s role did not change. The lighthouse is an exotic. The rest of the org is still measured on the old bar, which means the rational move for every non-lighthouse team is to ignore what the lighthouse built.
Stuck in the developer pilot. You rolled out AI SDLC tooling to a few hundred engineers without telling them what was now expected of them. So they used the new tooling the way they used every previous wave of developer tooling: as an accelerant for the work they were already doing, in the pattern they were already working in. You paid for a new capability and got a faster version of the old capability. This is not an AI problem. It is a role definition problem, and no amount of additional tooling fixes it.
Stuck in governance review. Your risk, legal, security, and compliance functions have a set of questions they wrote for the 2018 version of software development, and they are applying them to the 2026 version. The review takes nine months. During those nine months your competitor with looser governance ships. I am not asking you to ignore governance (governance is real, the questions are real, the regulators are real). I am asking you to recognize that governance has a speed, and right now that speed is the binding constraint on whether your company exists in 2028.
All three are the same failure wearing different costumes. You did not define the new role, you did not require anyone to qualify into it, and the organization read the signal correctly and kept doing what it was doing. The organization is not broken. It is behaving rationally inside the incentive structure you set. And it gets worse when you walk a level up and a level out.
Now Look at Your Middle Management
Walk down a level on your org chart. Your directors, your managers, the layer sitting between you and the people doing the work.
Ask yourself a quiet question. Did they actually train?
Not “did they attend the leadership session.” I mean did they sit down at a real codebase, configure the tooling, run an agent through a piece of work, and feel the thing their teams are supposed to feel? In most organizations the honest answer is no. They forwarded the training invite to their team, approved the licenses, and went back to running their staff meeting the way they ran it in 2022.
This is the layer where transformations die. They do not die because middle managers are bad. They die because nobody defined the new standard for the manager role either. You told your engineers the tools were coming. You did not tell your managers that the job of managing engineers had changed. So they kept running stand-ups about story points, measuring teams on velocity, approving pull requests on instinct calibrated to 2022 output. When their teams started working agentically, the managers could not tell the difference between a good agentic workflow and a sloppy one, because they had never run one.
A manager who has not shipped with agents cannot coach someone who is shipping with agents. They cannot calibrate the standard. They cannot tell real output from plausible-looking output, which is the expensive mistake, and it is happening in your organization right now while you read this.
Can they do it? Most of them, yes. The good ones have been asking for air cover to actually build again (my phone reminds me of this every week). Some of them will surprise you. A few will not make the transition, and you will know which ones inside sixty days. That is information you needed anyway.
Why did they not train? Because you did not require it of them, and their calendar is full of meetings that made sense in the old operating model. If you want your managers to operate in the new model you have to give them the time, and you have to require the qualification. Same bar as the engineers. Same demonstration. Shipped work, in the real codebase, reviewed by people who have already qualified.
Now, the honest version of the question. When you apply the standard to the management layer, the question is not only “how do I retrain these people.” It is also how many of those layers you still need. The honest answer in most organizations is fewer. In some organizations, quite a lot fewer. Writing code was probably not your bottleneck. Your org was. The review chains, the approval queues, the handoffs between teams who should have been one team, the status-report theater, the managers managing managers managing managers. Flatten it. Give authority back to the strong individual contributors who are already doing the work (you know who they are, they are the people the rest of the team messages first when something is on fire). Cut the layers that existed to coordinate the old operating model, because the new operating model does not need that much coordination.
Then raise the bar for the managers who remain to the same 2026 standard you are holding everyone else to. A smaller management layer with a higher bar is not a cost-cutting exercise. It is the shape an organization takes when the bottleneck moves from “how fast can engineers produce code” to “how fast can the org decide what to build next and get out of its own way.”
If the manager cannot hold the bar, the bar is not held. That is the whole thing.
It Is Not Only Engineering
Widen the lens. Standards do not stop at engineering.
Your product managers are writing the same one-pagers they wrote in 2022 and handing them to teams that could have drafted, prototyped, and tested three variants in the time it took the PM to write the document. Your designers are shipping static design files into workflows that now iterate on interfaces agentically. Your program managers are running the same status rituals on teams whose work no longer fits the cadence the rituals were designed for. Your compliance reviewers are applying 2018 questions to 2026 artifacts, and your recruiters are sourcing against a job description that was accurate two budget cycles ago.
Each of these roles needs its own standard. The qualification path for a PM looks different from the engineer’s path, but it maps to the same principle: define the role, require the demonstration, provide the onramp, hold the bar.
Set the standard only for engineers and you have built a fast engine inside a slow car. The engineering throughput shows up for about a quarter, and then the rest of the organization’s latency absorbs it. Intake slow, approvals slow, decisions slow. The engineers notice first, and the strongest of them leave first. The whole operating model moves, or none of it does.
What the Move Actually Looks Like
This is the version that works. I have watched it work (I have also watched it fail, usually at step five). Nobody enjoys it while it is happening.
- Define the role. Write the job description for “AI Software Engineer at this company” as if the role had just been invented. What does this person do, with what tools, at what throughput, against what quality bar, with what governance posture? One page, your own words, no committee.
- Define the qualification. A bounded, observable demonstration. A piece of real work, shipped through the real workflow, reviewed by people who have already qualified. Pass, or not-yet. No participation ribbons.
- Set the timeline. Ninety days for the first wave. Your strongest people go first (you are setting the ceiling, not the floor). Pick the people who will make the bar look reasonable.
- Provide the onramp. Frontier model access without a token ceiling, paired time with the people who already qualified, a working body of internal examples, and protected calendar space. The support has to be real, or the qualification is punitive. Punitive qualification is the fastest way to lose the people you needed to keep.
- Hold the line. This is the hard one. Some of your strongest people will not qualify on the first pass. You are going to feel pressure to move the bar. Do not move the bar. Move people, move timelines, move support. The bar is the only thing you do not move.
- Signal across the organization. A note from the CTO, in plain language, that says the new standard is the standard, this is not a pilot, and the people who have qualified are the ones defining what the work looks like here now.
Steps one through four are what every consultancy on earth will sell you a deck for. Steps five and six are what nobody sells you, because they require the executive to personally hold the line against their own organization. That is the job. If you cannot do that part, the rest is theater, and expensive theater at that.
What Step One Actually Looks Like
Executives ask me what the one-page role definition from step one reads like, so here is one. I wrote it the way I would write it if I were sitting next to you with a blank document open. Yours will be different, because your product and your governance posture are different. The shape is the thing.
AI Software Engineer (2026) — Example Role Definition
You build, ship, and operate production software using agentic workflows as your default operating mode. You are not a person who occasionally uses AI assistance. You are an engineer whose unit of work is “orchestrate an agent (or several) against a defined outcome, review the result against the standard, and ship.” You are accountable for the outcome, not the keystrokes.
What you do
- Own the full lifecycle of features, from problem framing through production operation. You write the spec, design the approach, orchestrate the agents that produce the code, review every change with the same rigor you would apply to your own, and carry the pager for what you ship.
- Drive throughput that would have required a team of five to seven people in 2022. This is the expectation, not the aspiration. If your week does not look like that, escalate. We have a bottleneck somewhere and the job is to find it, not to work around it.
- Keep the trunk green. Continuous delivery is the default. If you are branching for weeks you are working in a pattern that does not apply here anymore.
- Operate with governance baked in, not bolted on. Security, privacy, and compliance checks happen inside the workflow, with evidence captured automatically, so review is fast and auditable.
How we work
- Frontier model access is provided, uncapped within reason. Token cost is a cost of goods, not a budget line to ration. If you are throttling yourself on tokens, raise it.
- Pair with other qualified engineers and with agents interchangeably. The pair review is not optional. Code that no qualified human has looked at does not ship.
- Own the tests, whoever wrote them. Read the tests the agent generates and throw out the ones that do not carry their weight. Test suites are assets, not trophies.
- Telemetry first. If you cannot observe what you shipped inside an hour of it landing, it is not done.
What you are measured on
- Shipped production change that holds up under real load.
- Lead time from decision to production.
- Quality in the field (escape rate, MTTR, customer-visible incidents).
- The quality of the reviews you do on other people’s work, because the standard is only as strong as the review bar.
We do not measure you on pull request count, story points, hours in the office, or lines of code. Those are 2018 metrics and they are actively misleading on this operating model.
What you are expected to know
- How to build with agents in production, not just in a notebook or a personal side project. This includes prompt and context design, tool-use patterns, evaluation harnesses, and knowing when to reach for a smaller model and when to reach for a frontier one.
- The fundamentals that do not change: data structures, distributed systems tradeoffs, operability, security posture, readability. The agent does not replace any of that. It makes your leverage on those fundamentals larger.
- The tradeoffs between autonomy and oversight. You know when an agent should run unattended and when a human has to sit in the loop, and you can defend the call.
What qualifies you for this role
A demonstration. One piece of real work, shipped through our real delivery workflow, reviewed by engineers who have already qualified, meeting the standard above. Pass or not-yet. No certificates, no training hours, no seat time. We provide the onramp. You provide the evidence.
What this role is not
This is not a role where you maintain somebody else’s platform and write a ticket when you want a change. This is not a role where you pass work to a vendor and review the result. This is not a role where your value is measured by years of tenure or by the number of meetings on your calendar. If those are the roles you are interviewing for, there are companies hiring for them, but they are not us, and they are probably not going to exist in their current form in 2028.
That is one page. Your version will be shorter or longer depending on the product. The test for whether you wrote the right document is simple. Hand it to one of your strongest people and ask “does this describe the job you want to be doing next year, and is the bar clear enough that you know what qualified means?” If they say yes, you have it. If they hedge, keep writing.
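The field-quality measures in the example definition above (lead time from decision to production, MTTR, escape rate) are not exotic instrumentation; they reduce to simple arithmetic over delivery and incident records. A minimal sketch of that arithmetic, with hypothetical record shapes and field names (nothing here is from a specific tool):

```python
from datetime import datetime

# Hypothetical delivery and incident records; the field names
# ("decided", "shipped", "escaped", etc.) are illustrative.
deploys = [
    {"decided": datetime(2026, 1, 5), "shipped": datetime(2026, 1, 7)},
    {"decided": datetime(2026, 1, 10), "shipped": datetime(2026, 1, 11)},
]
incidents = [
    {"opened": datetime(2026, 1, 8, 9), "resolved": datetime(2026, 1, 8, 11), "escaped": True},
    {"opened": datetime(2026, 1, 12, 14), "resolved": datetime(2026, 1, 12, 15), "escaped": False},
]

def lead_time_days(deploys):
    """Mean decision-to-production lead time, in days."""
    spans = [(d["shipped"] - d["decided"]).total_seconds() for d in deploys]
    return sum(spans) / len(spans) / 86400

def mttr_hours(incidents):
    """Mean time to restore, in hours."""
    spans = [(i["resolved"] - i["opened"]).total_seconds() for i in incidents]
    return sum(spans) / len(spans) / 3600

def escape_rate(incidents):
    """Fraction of incidents that escaped pre-production checks."""
    return sum(1 for i in incidents if i["escaped"]) / len(incidents)

print(lead_time_days(deploys))  # 1.5
print(mttr_hours(incidents))    # 1.5
print(escape_rate(incidents))   # 0.5
```

The point of the sketch is that none of these measures require a self-reported survey; they fall out of timestamps your delivery pipeline and incident tracker already record, which is exactly what makes them a standard rather than a self-measurement.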
What Step One Looks Like for Your Engineering Leaders
Executives ask me for the manager version too, because the engineer JD is only half the picture. Here is one your HR business partner can lift into a requisition. Same bar as the engineer role, applied to the person who leads the team.
Engineering Leader (2026) — Role Definition
You are the front-line owner of a team of qualified engineers operating on our agent-driven delivery model. You own the team’s outcomes, you hold the engineering standard, and you ship alongside the team. You have qualified into our AI Software Engineer standard yourself, and you stay qualified by continuing to ship. This is not a program-management role. If the last time you shipped code to production was before 2024, the role is not for you in its current form, and we will give you a real onramp if you want the seat.
What you do
- Own the business outcomes assigned to your team, from scoping through production operation, with the governance posture our engineering standards require.
- Hold the standard. Every engineer on your team has qualified or is on an explicit, time-bounded qualification plan. You make the pass or not-yet call in partnership with other qualified leaders. You do not move the bar under pressure.
- Ship alongside the team. A meaningful portion of your week is in the code, in the pair-review rotation, using agents. When you stop doing this, your calibration drifts, and the standard drifts with you.
- Remove organizational bottlenecks. You identify the review chains, approval queues, and status rituals slowing your team down, and you escalate or remove them. Writing code is no longer the bottleneck. You are closest to where the new one lives, and you are expected to be vocal about it.
- Lead the team end-to-end: hire, onboard, coach, promote, performance-manage, and transition out when needed. Honest one-on-ones. Evidence-based calibration notes. Ratings you can defend.
- Partner with Product, Design, and Program Management on intake and sequencing. You do not accept a roadmap you cannot ship to the standard, and you do not tolerate one whose value you cannot explain to your engineers in two sentences.
- Carry the pager with the team. Review every significant incident. Own the follow-through.
How we work
- Frontier model access is provided to you and to every engineer on your team, uncapped within reason. Managing tokens is not your job. Managing outcomes is.
- Teams are deliberately small, three to eight qualified engineers. Fewer layers than the last company you worked at. Managers here operate closer to the work than you are accustomed to.
- Recurring meetings have to justify their seat against the alternative of shipping that hour.
What you are measured on
- Team outcomes against the business commitments you signed up for.
- Lead time, quality in the field, and regrettable attrition of qualified engineers.
- The quality of your reviews, calibrations, and promotions. We will read your write-ups and cross-check them against the evidence.
- Your own continued qualification. If you stop shipping, you stop qualifying, and if you stop qualifying you stop leading.
We do not measure you on org-chart size, meeting attendance, or the length of your weekly report. Those were 2018 metrics and they are actively misleading on this operating model.
What qualifies you for this role
Evidence, in three steps. One, demonstrate the AI Software Engineer standard yourself on a piece of real work. Two, review anonymized work from a peer team and provide your assessment; we compare your calibration against our qualified leaders. Three, lead a working session with a real team on a real problem; we observe how you remove bottlenecks, hold the bar, and coach to the standard in real time.
No take-home case studies. No trivia interviews. No whiteboard syntax. We ask you to do the job in miniature.
Required experience
- A demonstrated track record of shipping production software, not of managing teams who shipped it.
- Current, hands-on experience building with agents in production: prompt and context design, tool-use patterns, evaluation harnesses, model selection.
- Prior experience managing engineers with dignity and follow-through, including the hard transitions.
- Fluency in the engineering fundamentals that do not change: distributed systems tradeoffs, operability, security posture, accidental vs essential complexity.
- The ability to hold a standard under pressure. Be prepared to describe a time you held a bar that your organization wanted you to move.
What this role is not
Not a slides-and-status role. Not a role your calendar can substitute for your output. Not a role that rewards tenure in management without the hands-on practice to back it up.
Compensation and Employment
Full-time, benefits-eligible. Our compensation for roles qualifying at this standard is deliberately above market. Salary ranges are posted in compliance with applicable pay transparency laws, and offers reflect the evidence of your qualification and the market for people operating at this bar. Equal opportunity employer.
The Product Manager version, the Designer version, and the Program Manager version follow the same shape. Same About-the-Role stance, same responsibility set built around outcomes rather than ceremonies, same qualifying demonstration (do a piece of the job in miniature, reviewed by someone who has already qualified), same above-market compensation posture. I will not print all three here. The point is the pattern, and the pattern travels: define the role in your own words, require the demonstration, provide the onramp, hold the bar. Repeat it for every function whose standard needs to move.
Yes, You Are Going to Pay These People Above Market
Now the part HR did not want on the agenda.
The people who qualify into this role, internally or externally, are worth more than your current compensation bands say they are. Your bands were built for a 2022 operating model, benchmarked against a 2023 survey, and they are already wrong. You are not paying for years of experience anymore. You are paying for people who operate at the bar you just set, and there are not enough of them yet for the market to have a comfortable price.
Pay them above market. Say it in writing, say it in the offer, and say it to your existing people who qualify so they do not have to leave to find out what they are worth. I have written about this from a few angles already, so I will not repeat the whole argument here. Your HR partner needs to read As CxO, the 2 Things Your HR Needs to Do Different before the next comp cycle closes. The CTO who is already living this problem is the subject of He Cannot Hire the Engineer He Needs, Here Is What He Is Doing About It. And if your objection is that the token budget alone makes this person expensive, Just Give Me the Best Model walks through why the recruiter fee on the replacement dwarfs whatever you thought you were saving.
HR will tell you the bands exist to be fair. Fair is a real value. Fair at the cost of being unable to hire or retain the people who are about to define what your engineering organization is capable of is not fair, it is just slow. The bands have to move, the leveling has to move, and the comp philosophy has to move. You are not buying a better version of 2022. You are buying the 2026 operating model, and the price of that is the price of that.
One more thing, because it will come up in your next meeting with your CFO. The person you hire at twenty or thirty percent above band, who qualifies into this role and operates at the throughput we have been describing, is not more expensive. They are the cheapest engineer on your payroll, measured the way the business actually works. The expensive engineers are the ones you are paying at band who have not qualified, because their unit of output no longer justifies the line item. Your CFO already knows this. The question is whether your comp structure is allowed to act on it.
What You Are Hiring in 2026 Is Not What You Were Hiring in 2024
Here is the other half of the standards conversation, the one that keeps CTOs up at night when they are honest about it.
The person you were fighting to recruit in 2024, the one you paid top of band for, the one your VP called “a franchise pick” on the offer call, you are now quietly second-guessing. They did not get worse. They are the same person you hired. The standard moved and they did not move with it, because nobody told them the standard had moved.
If they are not working agentically in 2026, you are looking at their last three pull requests and wondering whether the person you paid four hundred and fifty thousand dollars all-in is actually producing at the level of someone on the new bar making less. You are not going to say that out loud. You are going to say it with your calendar, by not putting them on the next strategic initiative, and they are going to feel it.
The people you are interviewing this quarter are operating differently. They ship with agents. They think in workflows rather than keystrokes. They read your job description and ask what model access you provide before they ask about comp. The gap between your 2024 hires and your 2026 candidate pool is not a skill gap in the traditional sense, it is an operating model gap, and it widened in eighteen months.
I was having coffee with my friend Jim last week. Jim is a technical recruiter for a company whose name you know (he would rather I not put it in print), and he has been placing engineers for a long time, which means he has had to throw out his mental model and rebuild it this past year. I asked him what he was seeing, and he said something I have been stealing ever since.
“The right question on a resume is not years anymore,” he said. “It is maintenance or advance.”
I asked him what he meant.
“I get four resumes with a decade of experience,” he said. “They look identical on paper. One of them has been shipping new production software continuously since day one, through a real delivery workflow, picking up whatever tooling moved the bar every year. The other three have been sitting on top of a platform somebody else built in 2014. Writing tickets. Attending meetings. Reviewing pull requests from a vendor. They all have ten years on the line. They are not the same person.”
Then he said the part that stuck with me.
“I interviewed someone two weeks ago who has been in the industry for two years, working in a place where she shipped to production from day one, using agents from the first week, never knew a workflow that was not agentic. On the dimension your hiring managers actually care about in 2026, she has more real experience than the ten-year candidate who has been in maintenance mode for the last seven. And my clients’ intake forms filter her out because she does not hit the years-of-experience minimum.”
I told him that was the whole article I had been trying to write.
Your recruiting intake is almost certainly not set up to tell the difference between advance and maintenance, which means you are filtering people out by tenure at exactly the moment tenure is telling you the least. The new bar has to distinguish advance from maintenance. Inside the building, for the people who have been here a decade. Outside the building, for the people you are trying to hire.
There is an adverse selection pattern inside this that every CTO I know has lived through at least once. The people you want to stay leave. The people you wish would leave, stay. This is not a coincidence and it is not bad luck. It is the rational outcome of a company that has not set a new bar. The strongest people in your org have options, they can read the market, and they are leaving because they can see the standard the rest of the industry is moving to and they do not want to be stuck in a place that refuses to move with it. The people who are staying are the ones whose best offer is the one they already have. Every quarter you delay setting the standard, the mix in your building gets a little worse on both ends. Nobody on your leadership team wants to say that out loud, because saying it out loud is uncomfortable for the people who are still in the room.
The Economics, and the Precedent Your Board Is Already Using
Let me do the math out loud, because every CxO reading this has a CFO on their other shoulder.
An eight hundred person engineering organization, at a loaded cost of roughly two hundred and fifty thousand dollars per person, is a two hundred million dollar annual spend. If a five person team with the right workflow produces what thirty people used to produce (and that is the number we are measuring, not the number somebody projected), you are currently paying for four to six times the engineering capacity you need to deliver your current roadmap. At the conservative end of that range, that is about a hundred and fifty million dollars of capacity you could be redirecting toward things that actually grow the business.
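If you want to hand your CFO the same back-of-envelope in a form they can poke at, it is a few lines of arithmetic. Every input here is the illustrative figure from the paragraph above, not data from any specific company:

```python
# Back-of-envelope capacity math using the illustrative figures above.
headcount = 800                          # engineers in the org
loaded_cost = 250_000                    # loaded cost per engineer, USD/year
annual_spend = headcount * loaded_cost   # the $200M engineering line item

# A 5-person team shipping what 30 used to ship is a 6x compression;
# 4x is the conservative end of the range in the text.
for compression in (4, 6):
    needed = headcount / compression
    redeployable = (headcount - needed) * loaded_cost
    print(f"{compression}x: need ~{needed:.0f} engineers, "
          f"~${redeployable / 1e6:.0f}M of capacity redeployable")
```

At the conservative 4x end this is the roughly one hundred and fifty million dollars in the text; at 6x it is closer to one hundred and sixty-seven million. The point of the exercise is not the precise figure, it is that even the cautious end of the range dwarfs the tool bill.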
You are not going to fire eighty percent of your engineers. You are going to ship four to six times the roadmap you are shipping now, in the same calendar year, against competitors who already figured this out.
Skip the standard and you do not get the multiplier. You get the tool bill and the old throughput, which is the worst outcome available to you. You have spent the money and kept the constraint.
Now the uncomfortable thing, because someone on your board is already saying it in private.
In late 2022, Elon Musk bought Twitter and cut the engineering organization by roughly eighty percent. Set aside what you think of Musk personally (the pattern does not require you to like him). The operational fact is that a company with seven and a half thousand employees went to about fifteen hundred and the service did not fall over. That was pre-AI. No agents, no AI SDLC, none of the tooling every company has access to now. Just a willingness to remove the layers of coordination the old operating model required and run the company at a headcount that looked absurd from the outside.
If the Musk example is the one that closes tabs, there are others. Meta’s “year of efficiency” in 2023 removed roughly twenty-one thousand roles, the stock repriced, the product shipped. Shopify’s CEO told the company in an internal memo that any new headcount request had to start by proving AI could not do the job, on top of cuts already made to the existing base. GE, four decades earlier, spent the better part of a decade pulling out management layers because the layers were the thing slowing decisions down. These are four different companies, four different CEOs, four different industries, and one pattern. The coordination layer was doing less real work than the headcount said it was, and the organization ran when the layer came out.
Your board watched those happen. Your investors watched them happen. The CEOs on the boards of the other companies your board members sit on watched them happen, and every one of them filed the pattern away as a data point. The question they are quietly asking now, with actual agentic tooling available to every company on earth, is how long until they can expect the same from you. Not eighty percent. Maybe not anywhere near eighty percent. But something in that direction, delivered on a timeline that is no longer generous.
You can be the executive who gets in front of that conversation by setting the standard, reshaping the org, and showing the board a deliberate move toward an operating model that actually uses what is available. Or you can be the executive who gets the question across a boardroom table in twelve months and does not have an answer ready. Both of those jobs exist. Only one of them is still yours in 2028.
The Conversation You Are Avoiding
The reason you have not set the standard is not that you do not know you need to. You know. You read the articles. You sent one to your VP of Engineering last week.
The reason you have not set it is that setting it means telling some of the people who built this company that the thing they were great at for fifteen years is no longer the thing the company needs. That conversation is hard. It should be hard. You hired these people, some of them built what the company sells, and you owe them a real onramp with a real runway and a real shot at qualifying into the new role. Most of them will take it. A few will surprise you. A handful will leave, and you will miss them.
You also owe the standard to the rest of the organization. The people who are ready, who have been waiting for the bar to come up, who are watching you decide whether this company is still the place they should be spending the next five years of their career. If you do not set the standard, you are telling them the company has decided to coast.
Coasting is not on the menu. The companies coasting in 2026 are the acquisition targets of 2028.
The Move This Quarter
You do not need another pilot. You do not need a capability matrix, and you do not need a consultant to tell you which tools to standardize on.
You need to write the new engineering role in your own words, on a single page. Define the qualification in terms someone can pass or fail. Pick the first cohort (your strongest people, not your most available), and send the note that tells the organization the boats are burning.
Ninety days later you will know whether you have an organization that can compete in 2028, or whether you are watching one self-measure its way into a slow exit.
How do you set the new standard for working at your company, and what happens to the organization if you decide, one more quarter, to wait?
