I am having the same conversation three or four times a week now. A director, a VP of Engineering, sometimes a CTO. My friends who are now leaders, and leaders who are now friends. The technology part of the conversation is over. They get it. They understand that AI agents change how software gets built, that team structures need to shrink, that the old model where headcount equals output is broken. They are not confused about the engineering side.
They are stuck on two things. And until they fix both, they are not AI-native. They are AI-assisted with a nicer story.
The first is the staffing model. Finance and HR are still operating like every line of code needs to be written by a human, so the rational move is to find the cheapest human who can type. That made sense in 2018, when the work was labor-intensive and the only way to get more output was to hire more hands. AI changed the economics faster than those two functions updated their models, and now two of the most important departments in your company are optimizing for a world that no longer exists.
The second is the governance model. The workflows, the approval chains, the review gates, the compliance controls that your organization runs on today were all designed around the assumption that humans write every line of code by hand. AI inverted the economics of code production, but nobody rebuilt the governance to match. Your team bolted agents onto a 2019 process framework and called it transformation.
You cannot be AI-native without fixing both. This article is about what that actually looks like. But before I get into the model, I need to talk about the people.
The People Conversation
I rewrite this section every time I publish a version of this argument. I never feel like I get it right, and I think that is because there is no way to get it right. You are talking about people’s livelihoods, and any framework you put around that will feel insufficient to the person on the other side of the table.
But I keep trying.
These are people’s careers. Real ones. A QA lead I worked with years ago spent eight years building a testing practice from nothing at a company that had never had one. She built the automation suite, she trained the junior testers, she fought for budget every quarter, and she won. An engineering manager I know stayed up until midnight before every release because she cared more about her team shipping clean than she cared about her own sleep. A scrum master I respected genuinely believed the process was helping his teams deliver, and for a long time he was right.
I think about those people when I write about this shift. If you do not think about them, you should not be leading through it.
I think about my former boss Deb. I was a contractor, and when the budget dried up she did not wait until the last week to tell me. She sat me down four months before the end and said, “I have fought for your renewal and I cannot get it done. You have four months.” I was not angry. I had four months to find a better role, and I did. Deb was my reference. She is still someone I respect deeply, because she led with honesty when it would have been easier to delay the conversation and hope something changed.
That is what leading with empathy looks like. It is not softening the message. It is giving people time.
I look at what happened at companies like Box and others that went through large restructurings. Was it good or bad? I honestly do not know. But I know the ones who offered twenty weeks of severance and continued health insurance treated their people like adults, and the ones who did it by Zoom with a locked laptop treated them like liabilities. The decision might be the same. How you execute it is the difference between someone who lands on their feet and someone who never trusts a manager again.
I am not here to justify corporate decisions. But the competition is changing, and the markets are changing, and that needs to be addressed honestly. Careers have been disrupted before and they will be disrupted again. Coal shovelers on trains were essential until diesel engines arrived. Chief power officers ran entire departments managing electricity generation for factories until the grid made the role obsolete. The milkman was a daily fixture in American life until refrigeration and grocery stores changed the economics. Nobody thinks those transitions were wrong in hindsight. But every one of them involved real people who built real skills around a model that stopped being the model.
The question is not whether this transition happens. It is whether you lead through it like Deb, with honesty and enough runway for people to land well, or whether you wait until the budget is already gone and hand someone a box on a Friday.
Kindness is not pretending the change is not happening. The kindest leader I have seen in a restructuring sat down with each person on the affected team, one at a time, and said something like: “Here is what is changing, and here is why. Here is what I think your path forward looks like. And here is the time and support I am putting behind it.” Some of those conversations went well. Some of them did not. But every single person on that team told me afterward that they respected the honesty, even the ones who were angry about the outcome.
You owe your people four things. Clarity about what the new standard looks like. Real time to meet it (dedicated hours every week, not thirty minutes scraped from sprint margins). Honest assessment of where they stand today. And a path forward for everyone, even if that path eventually leads outside the team.
What you do not owe them is pretending the standard has not moved. That is not kindness. That is avoidance, and it costs them the months they needed to adapt.
The engineers who will struggle are not bad engineers. I want to be clear about that. Many of them were exactly what you needed five years ago. They learned the patterns the job required, they shipped reliably inside the system you built for them, and they did honest work for honest pay. The system changed underneath them. They did not create that system and they did not change it. You owe them more than a PIP and a timeline. You owe them a real training investment, an honest assessment, and enough lead time to build the skills the new model requires. Some will make the transition and surprise everyone, including themselves. Some will find that their talents fit better in a different kind of role. Both outcomes are fine if you gave them a fair shot.
I wrote about the full framework for having this conversation and about what happens when you give your engineers thirty minutes a week to learn and then wonder why adoption stalls. That is not their failure. It is yours.
Now. With that said, let me show you what the model looks like.
Four Craftsmen or Twenty Kids With Hammers
The first home my parents ever bought was built by the shop class at North Adams High School. The home economics class decorated it. My parents were thrilled. It was a bargain, and they were young, and a house was a house.
It was well built, too. Over enough time and enough hands, with a teacher supervising every cut, those students put together a solid home. It still stands today.
But it took an entire school year. The teacher did the critical work himself or stood over the student who did it. The pace was dictated by the learning objectives, not the homeowner’s move-in date. And it was one house. That is a fine way to teach. It is no way to run a professional contracting organization.
Now scale the thought experiment. You need to build a ranch home. Three bedrooms, two baths. A boring, profitable, get-it-done-right house.
Option one: four tenured craftsmen. Fifteen to twenty years of experience each. They know framing, electrical, plumbing, finish work because they have done it wrong, fixed it, and done it right so many times the knowledge lives in their hands. They talk directly to the homeowner, the inspector, the architect. No interpreter needed.
Option two: twenty eighteen-year-olds. Fresh out of a six-week training program. You hire a site supervisor, a foreman for every five kids, a safety coordinator, and a project manager to keep the foremen from stepping on each other.
I am not picking on the eighteen-year-olds. I was one. Most of the craftsmen I know were one. Some of those kids are going to be great. The kid in the corner who reads the blueprint before anyone tells him to, who stays late because he wants to understand how the framing connects to the foundation? He is going to be a craftsman in five years. Every principal on every job site started as that kid.
The issue is the orchestration cost. Twenty people need coordination, supervision, sequencing, safety protocols, and a foreman who spends more time teaching than building. None of that is a criticism of who they are. It is a description of what it costs to organize twenty people who are still learning.
The overhead is the problem, not the people.
Agents do the gopher work now. Agents carry the lumber, hang the drywall, run the cable through the studs. But do you want an eighteen-year-old wiring the power main? Do you want someone who has never graded a foundation behind the wheel of a bulldozer?
The agents handle the volume. The principals handle the judgment. And that kid in the corner, the one who stays late? You invest in him. You pair him with a principal. You give him the years he needs to become one. That is a different conversation than staffing him onto a twenty-person crew and calling it a development program.
Your $100K engineer might be your most expensive hire, not because their salary is high, but because when you average the total cost per head across the twenty-person team (salaries, managers, coordinators, rework, delay), your cheapest-looking team turns out to be your most expensive one.
My parents’ house turned out fine. But nobody pretended that was a scalable model. Yet every enterprise in America bets their software on exactly this structure for line-of-business applications that are not skyscrapers. They are ranch homes.
(If you are reading this and thinking “where do I find four principals?” you are asking the right question. I wrote about that in He Cannot Hire the Engineer He Needs.)
Your Line-of-Business Apps Are Ranch Homes
Most of the software your company builds is not innovative. It should not be. Your claims processing system, your internal dashboard, your customer portal, your partner onboarding workflow. These are ranch homes. They need to be shipped, not reimagined.
And you are building them with twenty-person teams and six months of ceremony. A spec. A design review. Sprint planning. Estimation. A demo. A retro. A hardening sprint. A release candidate. A staging environment. A production release window.
For a ranch home.
Four principals with AI agents build that same application in weeks. A principal reads the business rules, builds a working prototype with an agent by end of day, and walks it over to the business owner the next morning. “Is this what you meant?” “Almost, change the approval threshold and add an exception for international orders.” Done by lunch.
That conversation, the one between the craftsman and the homeowner, is where all the value lives. Every person you put between those two people adds cost and removes fidelity.
What Does an AI-Native Team Cost?
A senior principal costs $250,000 to $350,000 fully loaded. Four of them cost $1.2 million a year. Your CFO will flinch.
Now price the alternative. Twenty junior-to-mid engineers at $120,000 fully loaded: $2.4 million. Three engineering managers at $200,000: $600,000. Two scrum masters at $130,000: $260,000. A QA team of four at $110,000: $440,000. A project manager at $150,000. A technical architect at $250,000 who spends half their time in governance meetings. That is $4.1 million.
The twenty-person team ships one line-of-business application in six months. The four principals ship it in weeks and start the next one. Your four most expensive engineers are your cheapest team. The $300K principal is your most cost-effective employee, not despite their salary but because of it.
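If you want to pressure-test that math yourself, here is a minimal sketch using the figures above. The headcounts and salaries are the illustrative estimates from this article, not industry benchmarks; swap in your own comp bands.

```python
# Annual fully loaded cost comparison, using the article's illustrative figures.

traditional_team = {
    "engineers (20 x $120K)": 20 * 120_000,
    "engineering managers (3 x $200K)": 3 * 200_000,
    "scrum masters (2 x $130K)": 2 * 130_000,
    "QA (4 x $110K)": 4 * 110_000,
    "project manager": 150_000,
    "technical architect": 250_000,
}
principal_team = {"principals (4 x $300K)": 4 * 300_000}

traditional_total = sum(traditional_team.values())  # $4.1M
principal_total = sum(principal_team.values())      # $1.2M
traditional_headcount = 20 + 3 + 2 + 4 + 1 + 1      # 31 people

print(f"Traditional team:  ${traditional_total:,}")                     # $4,100,000
print(f"Principal team:    ${principal_total:,}")                       # $1,200,000
print(f"Annual difference: ${traditional_total - principal_total:,}")   # $2,900,000

# The "$100K engineer" point: averaged across the whole structure,
# cost per head exceeds the engineers' sticker price.
print(f"Cost per head, traditional: ${traditional_total // traditional_headcount:,}")
```

Run it with your own numbers before the CFO meeting. The per-head line is usually the one that lands: the overhead, not the salaries, is where the money goes.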
I wrote about the HR side of this and about what happens when your CFO drives the AI tool decision. I also wrote about how to negotiate the comp conversation for the leaders who are stuck in a comp committee fight right now, trying to explain to HR why a $350K IC band is not a precedent-setting anomaly but the new cost of staying competitive. That fight is real, it is political, and it is the single most common place I see this initiative die before it starts.
We Already Did This
I am not asking you to take this on faith. Nathan did it.
Nathan joined a B2B SaaS company in the $5M to $15M ARR range as CTO. The plan from ownership was straightforward: hire eight to ten engineers, build a proper team. He looked at that plan, then he threw it out. He did not hire a single person. He kept one existing associate engineer and took direct ownership of the technical direction. Two people. A CTO and an associate engineer. For a company doing millions in revenue with a platform that needed to be dismantled and rebuilt.
In twelve months, Nathan and that one associate decomposed the monolith, automated deployments, stood up CI/CD, and outshipped the original ten-person plan. The cost was a fraction of what ten engineers would have run. The quality was higher because the person making architectural decisions was the same person writing the code, and he had twenty years of production experience backing every choice.
Nathan is now a technical leader at Agent Driven Development. I wrote the full story in Customer Zero: Two Engineers, One Year, More Output Than Ten. The numbers are real. The company is real. The output is documented.
That was one data point with two people. The model I am describing here is four. The economics only improve as agents get better, and they are getting better every quarter.
The Governance Model Changes or Nothing Changes
This is the most important section in this article, and it is the one most people will want to skip because it is not as exciting as the math. Read it anyway.
Every CISO, every compliance officer, every auditor who reads the sections above is going to ask the same question: “Who owns the governance when you have stripped out every layer that used to handle it?”
The AI-native team does. The four principals own it, define it, and evolve it as the tooling and the regulatory landscape change. That is not a gap in the model. It is the model.
You cannot just bolt agents onto an existing governance framework and call it done. The workflows, the approval chains, the review gates, the separation of duties, all of it was built around an assumption that code is expensive to produce and cheap to review. AI inverted that assumption. Code is now cheap to produce and expensive to review well. The old governance framework cannot account for that inversion because it was never designed for a world where an agent generates three thousand lines in an afternoon and a human has to verify every one of them.
The governance has to be rebuilt for the world that actually exists. An AI-native team does that rebuilding. They define what “reviewed” means when an agent writes the code. They define what “tested” means when an agent generates the test suite. They build the audit trail into the tooling itself, not into a meeting cadence or a spreadsheet someone fills out after the fact. They automate the compliance artifacts that used to take a human three days to assemble. And because the governance itself is agent-assisted, they make it faster and more rigorous than what it replaced.
If your governance model is still the one you inherited from 2019, you are not AI-native. You are AI-assisted in a costume. Your engineers added agents to their IDEs, but the organization around them still operates like every commit needs a human chain of custody designed for a world where humans were the only ones committing.
The principals do not skip governance. They rebuild it. Your compliance team sets the requirements (that is their job and it should stay their job). The principals build the system that meets those requirements. That is the correct separation of duties for an AI-native world.
I will say this plainly: if your four principals cannot articulate your compliance framework, cannot explain how their workflow satisfies your regulatory obligations (whether that is SOX, PCI, HIPAA, or something else), and cannot show an auditor the automated trail from customer requirement to deployed code, they are not principals. They are senior engineers with expensive salaries. The governance capability is not optional. It is the qualification bar.
What Should You Hire For on an AI-Native Team?
Two years ago you hired for language fluency. Can they write React. Can they write Go. Do they know Kubernetes. That question is almost irrelevant now. An agent can produce code in any stack. The question is whether the human directing it knows what good looks like, and whether they know it deeply enough to catch the things the agent gets wrong.
Software engineering and system design. This is still the first qualification and it is not negotiable. An agent can generate code. It cannot design a system. It does not know where the boundaries belong between services, or what data model will survive the next three years of business changes, or why the last team’s architecture collapsed under load at 2 AM on a Thursday. That knowledge comes from building systems, operating them in production, and learning from the ones that failed. If someone cannot whiteboard a system, reason through its failure modes, and explain the tradeoffs they made without referencing what an agent suggested, they are not a principal. They are a prompt operator. You can hire prompt operators for considerably less than $300K. I wrote about why your codebase itself needs to be structured for agents to work within it, and the answer starts with the humans who design it.
Context architecture. Can this person take a complex business domain (your payment system, your claims process, your supply chain) and break it into pieces an agent can work within? The engineer who can explain the payment system’s rules in three sentences directs an agent that builds the right thing. The one who cannot gets code that compiles and fails the first time a real user touches it.
Specification skill. Can they externalize their thinking with enough precision that another intelligence, human or AI, could execute it without a follow-up conversation? Most engineers never had to do this. They held the requirements in their head and typed them directly into the codebase. That worked when the only consumer of their intent was their own hands.
Judgment under speed. An agent produces code fast. The engineer who cannot evaluate that output just as fast becomes the bottleneck. You need people who can read a function, spot the edge case the agent missed, identify the security vulnerability it introduced, and decide whether to fix or rewrite, all in minutes. You do not train that in a workshop. It comes from years of watching systems break and getting paged when they do.
Governance instinct. Can they build the compliance framework, not just follow one that somebody else wrote? Can they design an audit trail, not just generate an artifact? This is the qualification that did not exist two years ago, and it is the one that separates a principal from a senior engineer who happens to be good with agents.
Intellectual honesty. The most dangerous engineer on an AI-native team is the one who accepts agent output without understanding it. You need people who say “I do not understand what this does yet” instead of “it passes the tests, ship it.” Hire people who tell you what they do not know. That is not a weakness. It is the instinct that keeps agent-generated code from becoming agent-generated liability.
The Operating Model
Product managers build POCs, not specs. A PM talks to a customer, builds a working proof of concept with an agent, iterates with two customers, and hands engineering something validated. I wrote about this in the spec is dead and the POC replaced it.
Engineers harden, not greenfield. The engineer’s job shifts from “interpret a spec and build” to “take working software and make it production-grade.” Error handling, load resilience, security review, monitoring. That is a more demanding job than the old one.
Testing is the Testing Square. The testing pyramid was a financial compromise. Agents removed the cost constraint. An AI-native team runs equal investment across unit, integration, end-to-end, and contract tests. I wrote about why the pyramid is obsolete.
Tooling is standardized. One agent platform. One set of guardrails. Your best salesperson did not pick Salesforce. Your CRO did. I wrote about why the same logic applies to AI tools.
The Leadership Model
Every executive I walk through this model asks the same question, and it reveals how deeply the old structure is wired into their thinking.
“If I have four principals, who manages them?”
Nobody manages them. That is the point. What they need is a connector. In the house analogy, this is the general contractor. Not the person who tells the electrician how to wire a panel (the electrician knows how to wire a panel). The general contractor makes sure the right trades show up in the right order, that the permit is pulled, and that the homeowner’s change orders get communicated before the drywall goes up. They coordinate sequence and priority. They do not supervise technique.
That person might be a VP of Engineering who still builds, or a CTO who has not retreated entirely into strategy decks, or a principal among the four who rotates into the connector role because they happen to be good at translating between business language and technical reality.
What that person is not: a traditional people manager running weekly one-on-ones about career ladders and tracking utilization. And while I am on the subject, the weekly one-on-one has become the code review of HR. In engineering, we held onto manual code review for years after automated testing and linting made most of it redundant, because it felt like the responsible thing to do. I wrote about why that instinct needs to evolve. The weekly one-on-one is the same pattern in a different department. It was designed for a world where the manager needed to stay close to the work because the people doing it were still learning how. For four self-directed principals who have been shipping software longer than some of your managers have been in the industry, a standing weekly meeting where someone asks “how are things going” and “is there anything blocking you” is the organizational equivalent of a senior architect submitting a pull request so a mid-level can approve it.
For most of your organization, the one-on-one still serves a real purpose. For these four people, the structure is backwards. Replace it with what they actually need: a regular conversation with the business about what matters most, and the trust to go build it.
The 90-Day Proof
You have seen ninety-day plans before. This is not a transformation roadmap. It is a proof-of-concept for a new operating model that either validates the approach or kills it.
Days 1-30: Map your real value streams. Pick your AI tooling stack. Identify four principals who demonstrate judgment, specification skill, deep domain knowledge, and the governance instinct to own their compliance workflow. Give them a real line-of-business application with real users.
Days 31-60: The four principals ship production software. Measure everything (cycle time, defect rate, cost per feature) and compare it honestly to what your traditional team produces. Start the people assessment across the broader organization. Give every engineer four hours a week of real learning time.
Days 61-90: Propagate the data. Stand up a second team. Not everyone will qualify on the first pass. That is expected and it is not a verdict on their worth. Some of the people who do not qualify in month two will qualify in month six if you give them the learning time you promised.
By day 90 you will have proven that four people with deep experience, proper tooling, direct access to the business stakeholder, and ownership of their governance model will outperform twenty people with a management structure and an inherited process framework. Once that proof exists, the conversation shifts from whether to change to how to change in a way that respects the people who got you here.
If the numbers do not materialize by day 60, kill it. That is the exit ramp. The 90-day proof is designed to fail fast and fail cheap before you commit organizational capital to a restructuring you cannot reverse. The pilot team is four people and one application. The blast radius is contained.
The Questions You Should Be Asking
I have shared this model with enough leaders to know which questions come up in the first ten minutes. Let me answer them honestly rather than pretend they do not exist.
“What happens when one of the four leaves?”
This is the bus factor question and it is the right one to ask. Four people is a concentration risk. I will not pretend otherwise. But here is what I have observed: the bus factor fear assumes that institutional knowledge lives in people’s heads and nowhere else. In the old model, that was true, because the process did not require anyone to externalize their thinking. Principals working with agents externalize everything. The specifications, the architectural decisions, the business rules, the compliance rationale, all of it is written down because agents need it written down to function. The documentation is a byproduct of the workflow, not an afterthought somebody files after the sprint.
That does not make the risk zero. It makes the risk manageable. You mitigate it the same way a good general contractor does: you pay well, you treat them with respect, you give them interesting work, and you do not pretend they are replaceable by someone cheaper. I wrote about what the comp conversation looks like and about the real cost of trying to find these people on the open market. Retention is not a mystery. Principals stay where the work is real, the autonomy is genuine, and the bureaucracy is low.
“Who carries the pager?”
Four principals means a four-person on-call rotation. That is tight. You are right to flag it. But consider what you are comparing it to: a twenty-person team where the same two seniors get paged every time because nobody else understands the system well enough to fix it at 2 AM. I have seen that pattern in every large org I have worked with. The nominal rotation is twenty people. The effective rotation is two or three. Four principals who all understand the entire system is a better on-call posture than twenty engineers where three of them carry the real weight.
As the portfolio grows and you stand up second and third teams, the on-call capacity grows with it. The key is that every principal on every team can support the systems they built because they built them with agents that documented the decisions along the way.
“What does the transition actually cost?”
I am not going to pretend the transition is free. During the 90-day proof you are running both models simultaneously: your existing team continues to operate while the four principals prove the new model. That is a double-carry on your P&L for one quarter. If the proof works and you decide to restructure, you are looking at severance costs (budget three to six months per affected employee depending on tenure and jurisdiction), recruiting costs for principals you do not currently have (figure $50K to $80K per head through a specialized recruiter or six months of internal sourcing), and a productivity valley during knowledge transfer that lasts two to three months.
Add it up and the transition costs real money, probably $1M to $2M for a twenty-person team restructuring, depending on your comp bands and your jurisdiction. That is not nothing. But measure it against the $2.9M annual savings and the 6x throughput improvement, and the payback period is under a year. Your CFO has approved worse bets with longer payback.
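The payback arithmetic is simple enough to sketch. The transition-cost range and the savings figure are the estimates above, not audited numbers; treat this as a back-of-the-envelope check, not a business case.

```python
# Payback period for the restructuring, using the article's estimates.
annual_savings = 2_900_000        # traditional $4.1M minus principal-team $1.2M
transition_cost_low = 1_000_000   # low end of the one-time transition estimate
transition_cost_high = 2_000_000  # high end

payback_low_months = transition_cost_low / annual_savings * 12
payback_high_months = transition_cost_high / annual_savings * 12

print(f"Payback, low estimate:  {payback_low_months:.1f} months")   # 4.1 months
print(f"Payback, high estimate: {payback_high_months:.1f} months")  # 8.3 months
```

Even at the high end of the cost range, the payback lands well inside a year, which is the number your CFO will actually care about.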
I wrote about mapping the value stream to see where the real waste is. That exercise, done honestly, usually makes the transition cost argument for you.
“This does not apply to everything we build.”
Correct. I am talking about line-of-business applications. Ranch homes. Your core platform, your ML infrastructure, your real-time trading system, your mission-critical systems with fifteen years of regulatory accretion, those are not ranch homes. They are hospitals and bridges, and they require a different conversation about team structure, risk tolerance, and regulatory complexity. I wrote about what happens when AI meets a monolith that was never designed for it and how to test whether a consultant’s AI modernization plan is real or theater.
Most enterprises have dozens of ranch homes for every hospital. Start with the ranch homes. The proof compounds.
Two Tests
If you take nothing else from this article, take these two tests. Apply them honestly to your organization, and they will tell you whether you are AI-native or wearing a costume.
Test one: the staffing model. Look at your line-of-business application teams. Are you running four principals who talk directly to the business? Or are you running twenty juniors with a management layer that exists to coordinate people who need coordinating? If your Finance team is still optimizing for the cheapest cost per head and your HR team is still benchmarking against 2018 comp data, you are not AI-native. You are buying bargain typists for work that no longer requires typing.
Test two: the governance model. Does your team own their workflow, their compliance controls, their audit trail, and their definition of what “reviewed” and “tested” and “approved” mean in an agent-driven world? Or are they running agents inside a governance framework that was designed for humans writing every line by hand? If your governance model predates your AI adoption, you are not AI-native. You are AI-assisted with the same bureaucracy and a faster text editor.
Both tests have to pass. A team of four principals running inside 2019 governance is expensive talent slowed down by inherited process. A team of twenty juniors with a new governance model is the same coordination overhead with fancier compliance artifacts. You need the right people and the right governance together, or neither one works.
I want to close with something that is easy to forget when you are reading about operating models and cost structures. Every person on your current team showed up because you hired them. They moved their families. They turned down other offers. They built their careers around the structure you created. If you are going to change that structure, and I believe you should, you owe them the same respect you would want if someone changed the rules of your job while you were doing it well.
Do it. Do it soon. But do it like someone who remembers what it felt like to be the new kid on the job site, hoping the foreman would teach you something worth knowing.
Is your organization passing both tests today? And if it is not, what happens to the people, the budgets, and the competitive position you are protecting while you wait for Finance and HR to catch up?
