
Podcast Transcript

How to Build an AI-Native Engineering Team (Not an AI-Assisted One)

March 19, 2026


I am having the same conversation three or four times a week now. A director, a Vice President of Engineering, sometimes a Chief Technology Officer. My friends who are now leaders, and leaders who are now friends. The technology part of the conversation is over. They get it. They understand that AI agents change how software gets built, that team structures need to shrink, and that the old model where headcount equals output is broken. They are not confused about the engineering side.

Right. They are stuck on two things. And until they fix both, they are not AI-native. They are AI-assisted with a nicer story.

The first is the staffing model. Finance and Human Resources are still operating like every line of code needs to be written by a human, so the rational move is to find the cheapest human who can type. That made sense in twenty eighteen, when the work was labor-intensive and the only way to get more output was to hire more hands. AI changed the economics faster than those two functions updated their models, and now two of the most important departments in your company are optimizing for a world that no longer exists.

The second is the governance model. The workflows, the approval chains, the review gates, and the compliance controls that your organization runs on today were all designed around the assumption that humans write every line of code by hand. AI inverted the economics of code production, but nobody rebuilt the governance to match. Your team bolted agents onto a twenty nineteen process framework and called it transformation.

Look. You cannot be AI-native without fixing both. I want to talk about what that actually looks like. But before I get into the model, I need to talk about the people.

Now. Let's talk about the people conversation. I rewrite this argument every time I publish a version of it. I never feel like I get it right, and I think that is because there is no way to get it right. You are talking about people's livelihoods, and any framework you put around that will feel insufficient to the person on the other side of the table.

But I keep trying.

These are people's careers. Real ones. A Quality Assurance lead I worked with years ago spent eight years building a testing practice from nothing at a company that had never had one. She built the automation suite, she trained the junior testers, she fought for budget every quarter, and she won. An engineering manager I know stayed up until midnight before every release because she cared more about her team shipping clean than she cared about her own sleep. A scrum master I respected genuinely believed the process was helping his teams deliver, and for a long time he was right.

I think about those people when I write about this shift. If you do not think about them, you should not be leading through it.

I think about my former boss Deb. I was a contractor, and when the budget dried up she did not wait until the last week to tell me. She sat me down four months before the end and said, I have fought for your renewal and I cannot get it done. You have four months. I was not angry. I had four months to find a better role, and I did. Deb was my reference. She is still someone I respect deeply, because she led with honesty when it would have been easier to delay the conversation and hope something changed.

That is what leading with empathy looks like. It is not softening the message. It is giving people time.

I look at what happened at companies like Box and others that went through large restructurings. Was it good or bad? I honestly do not know. But I know the ones who offered twenty weeks of severance and continued health insurance treated their people like adults, and the ones who did it by video call with a locked laptop treated them like liabilities. The decision might be the same. How you execute it is the difference between someone who lands on their feet and someone who never trusts a manager again.

I am not here to justify corporate decisions. But the competition is changing, and the markets are changing, and that needs to be addressed honestly. Careers have been disrupted before and they will be disrupted again. Coal shovelers on trains were essential until diesel engines arrived. Chief power officers ran entire departments managing electricity generation for factories until the grid made the role obsolete. The milkman was a daily fixture in American life until refrigeration and grocery stores changed the economics. Nobody thinks those transitions were wrong in hindsight. But every one of them involved real people who built real skills around a model that stopped being the model.

The question is not whether this transition happens. It is whether you lead through it like Deb, with honesty and enough runway for people to land well, or whether you wait until the budget is already gone and hand someone a box on a Friday.

Kindness is not pretending the change is not happening. The kindest leader I have seen in a restructuring sat down with each person on the affected team, one at a time, and said something like: Here is what is changing, and here is why. Here is what I think your path forward looks like. And here is the time and support I am putting behind it. Some of those conversations went well. Some of those did not. But every single person on that team told me afterward that they respected the honesty, even the ones who were angry about the outcome.

You owe your people four things. First, clarity about what the new standard looks like. Second, real time to meet it, meaning dedicated hours every week, not thirty minutes scraped from sprint margins. Third, an honest assessment of where they stand today. And fourth, a path forward for everyone, even if that path eventually leads outside the team.

What you do not owe them is pretending the standard has not moved. That is not kindness. That is avoidance, and it costs them the months they needed to adapt.

The engineers who will struggle are not bad engineers. I want to be clear about that. Many of them were exactly what you needed five years ago. They learned the patterns the job required, they shipped reliably inside the system you built for them, and they did honest work for honest pay. The system changed underneath them. They did not create that system and they did not change it. You owe them more than a Performance Improvement Plan and a timeline. You owe them a real training investment, an honest assessment, and enough lead time to build the skills the new model requires. Some will make the transition and surprise everyone, including themselves. Some will find that their talents fit better in a different kind of role. Both outcomes are fine if you gave them a fair shot.

I have written about the full framework for having this conversation and about what happens when you give your engineers thirty minutes a week to learn and then wonder why adoption stalls. That is not their failure. It is yours.

Now. With that said, let me show you what the model looks like.

The model comes down to a choice. Four craftsmen or twenty kids with hammers. The first home my parents ever bought was built by the shop class at North Adams High School. The home economics class decorated it. My parents were thrilled. It was a bargain, and they were young, and a house was a house.

It was well built, too. Over enough time and enough hands, with a teacher supervising every cut, those students put together a solid home. It still stands today.

But it took an entire school year. The teacher did the critical work himself or stood over the student who did it. The pace was dictated by the learning objectives, not the homeowner's move-in date. And it was one house. That is a fine way to teach. It is no way to run a professional contracting organization.

Now scale the thought experiment. You need to build a ranch home. Three bedrooms, two baths. A boring, profitable, get-it-done-right house.

Option one: four tenured craftsmen. Fifteen to twenty years of experience each. They know framing, electrical, plumbing, and finish work because they have done it wrong, fixed it, and done it right so many times the knowledge lives in their hands. They talk directly to the homeowner, the inspector, and the architect. No interpreter needed.

Option two: twenty eighteen-year-olds. Fresh out of a six-week training program. You hire a site supervisor, a foreman for every five kids, a safety coordinator, and a project manager to keep the foremen from stepping on each other.

I am not picking on the eighteen-year-olds. I was one. Most of the craftsmen I know were one. Some of those kids are going to be great. The kid in the corner who reads the blueprint before anyone tells him to, who stays late because he wants to understand how the framing connects to the foundation? He is going to be a craftsman in five years. Every principal on every job site started as that kid.

The issue is the orchestration cost. Twenty people need coordination, supervision, sequencing, safety protocols, and a foreman who spends more time teaching than building. None of that is a criticism of who they are. It is a description of what it costs to organize twenty people who are still learning.

The overhead is the problem, not the people.

Agents do the gopher work now. Agents carry the lumber, hang the drywall, and run the cable through the studs. But do you want an eighteen-year-old wiring the power main? Do you want someone who has never graded a foundation behind the wheel of a bulldozer?

The agents handle the volume. The principals handle the judgment. And that kid in the corner, the one who stays late? You invest in him. You pair him with a principal. You give him the years he needs to become one. That is a different conversation than staffing him onto a twenty-person crew and calling it a development program.

Your one hundred thousand dollar engineer might be your most expensive spend, not because the salary is high, but because of everything that travels with it. Average the total cost per head across the twenty-person team, including salaries, managers, coordinators, rework, and delay, and your cheapest-looking hires belong to your most expensive team.

My parents' house turned out fine. But nobody pretended that was a scalable model. Yet every enterprise in America bets their software on exactly this structure for line-of-business applications that are not skyscrapers. They are ranch homes.

Your line of business applications are ranch homes. Most of the software your company builds is not innovative. It should not be. Your claims processing system, your internal dashboard, your customer portal, and your partner onboarding workflow. These are ranch homes. They need to be shipped, not reimagined.

And you are building them with twenty-person teams and six months of ceremony. A specification. A design review. Sprint planning. Estimation. A demo. A retrospective. A hardening sprint. A release candidate. A staging environment. A production release window.

All for a ranch home.

Four principals with AI agents build that same application in weeks. A principal reads the business rules, builds a working prototype with an agent by end of day, and walks it over to the business owner the next morning. Is this what you meant? Almost, change the approval threshold and add an exception for international orders. Done by lunch.

That conversation, the one between the craftsman and the homeowner, is where all the value lives. Every person you put between those two people adds cost and removes fidelity.

So. What does an AI-native team cost? A senior principal costs two hundred fifty thousand dollars to three hundred fifty thousand dollars fully loaded. Four of them cost one point two million dollars a year. Your Chief Financial Officer will flinch.

Now price the alternative. Twenty junior-to-mid engineers at one hundred twenty thousand dollars fully loaded is two point four million dollars. Three engineering managers at two hundred thousand dollars is six hundred thousand dollars. Two scrum masters at one hundred thirty thousand dollars is two hundred sixty thousand dollars. A quality assurance team of four at one hundred ten thousand dollars is four hundred forty thousand dollars. A project manager at one hundred fifty thousand dollars. A technical architect at two hundred fifty thousand dollars who spends half their time in governance meetings. That is four point one million dollars.

The twenty-person team ships one line-of-business application in six months. The four principals ship it in weeks and start the next one. Your four most expensive engineers are your cheapest team. The three hundred thousand dollar principal is your most cost-effective employee, not despite their salary but because of it.
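The arithmetic behind that claim is worth laying out explicitly. Here is a small sketch using the transcript's own figures; these are the article's illustrative numbers, not benchmarks, and your compensation bands will differ.

```python
# Illustrative cost comparison using the figures from the transcript.
# All amounts are fully loaded annual costs in USD.

principal_team = 4 * 300_000  # four senior principals at roughly $300K each

traditional_team = sum([
    20 * 120_000,  # junior-to-mid engineers
    3 * 200_000,   # engineering managers
    2 * 130_000,   # scrum masters
    4 * 110_000,   # quality assurance
    1 * 150_000,   # project manager
    1 * 250_000,   # technical architect (half-time in governance meetings)
])

print(f"Principal team:   ${principal_team:,}")     # $1,200,000
print(f"Traditional team: ${traditional_team:,}")   # $4,100,000
print(f"Annual delta:     ${traditional_team - principal_team:,}")  # $2,900,000
```

And that delta does not yet count the throughput difference: the same application shipped in weeks instead of six months.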

I have written about the Human Resources side of this and about what happens when your Chief Financial Officer drives the AI tool decision. I also wrote about how to negotiate the compensation conversation for the leaders who are stuck in a committee fight right now, trying to explain to Human Resources why a three hundred fifty thousand dollar Individual Contributor band is not a precedent-setting anomaly but the new cost of staying competitive. That fight is real, it is political, and it is the single most common place I see this initiative die before it starts.

We already did this. I am not asking you to take this on faith. Nathan did it.

Nathan joined a business-to-business Software as a Service company in the five million to fifteen million dollar Annual Recurring Revenue range as Chief Technology Officer. The plan from ownership was straightforward: hire eight to ten engineers, build a proper team. He looked at that plan, then he threw it out. He did not hire a single person. He kept one existing associate engineer and took direct ownership of the technical direction. Two people. A Chief Technology Officer and an associate engineer. For a company doing millions in revenue with a platform that needed to be dismantled and rebuilt.

In twelve months, Nathan and that one associate decomposed the monolith, automated deployments, stood up Continuous Integration and Continuous Deployment, and outshipped the original ten-person plan. The cost was a fraction of what ten engineers would have run. The quality was higher because the person making architectural decisions was the same person writing the code, and he had twenty years of production experience backing every choice.

Nathan is now a technical leader at Agent Driven Development. The numbers are real. The company is real. The output is documented.

That was one data point with two people. The model I am describing here is four. The economics only improve as agents get better, and they are getting better every quarter.

Now. The governance model changes or nothing changes. This is the most important part of the conversation, and it is the one most people will want to skip because it is not as exciting as the math. Listen anyway.

Every security chief, every compliance officer, and every auditor who hears this is going to ask the same question: Who owns the governance when you have stripped out every layer that used to handle it?

The AI-native team does. The four principals own it, define it, and evolve it as the tooling and the regulatory landscape change. That is not a gap in the model. It is the model.

You cannot just bolt agents onto an existing governance framework and call it done. The workflows, the approval chains, the review gates, and the separation of duties were all built around an assumption that code is expensive to produce and cheap to review. AI inverted that assumption. Code is now cheap to produce and expensive to review well. The old governance framework cannot account for that inversion because it was never designed for a world where an agent generates three thousand lines in an afternoon and a human has to verify every one of them.

The governance has to be rebuilt for the world that actually exists. An AI-native team does that rebuilding. They define what reviewed means when an agent writes the code. They define what tested means when an agent generates the test suite. They build the audit trail into the tooling itself, not into a meeting cadence or a spreadsheet someone fills out after the fact. They automate the compliance artifacts that used to take a human three days to assemble. And because the governance itself is agent-assisted, they make it faster and more rigorous than what it replaced.

If your governance model is still the one you inherited from twenty nineteen, you are not AI-native. You are AI-assisted in a costume. Your engineers added agents to their editors, but the organization around them still operates like every commit needs a human chain of custody designed for a world where humans were the only ones committing.

The principals do not skip governance. They rebuild it. Your compliance team sets the requirements. That is their job and it should stay their job. The principals build the system that meets those requirements. That is the correct separation of duties for an AI-native world.

I will say this plainly. If your four principals cannot articulate your compliance framework, cannot explain how their workflow satisfies your regulatory obligations, whether that is the Sarbanes Oxley Act, the Payment Card Industry standards, or the Health Insurance Portability and Accountability Act, and if they cannot show an auditor the automated trail from customer requirement to deployed code, they are not principals. They are senior engineers with expensive salaries. The governance capability is not optional. It is the qualification bar.

So. What should you hire for on an AI-native team? Two years ago you hired for language fluency. Can they write React. Can they write Go. Do they know Kubernetes. That question is almost irrelevant now. An agent can produce code in any stack. The question is whether the human directing it knows what good looks like, and whether they know it deeply enough to catch the things the agent gets wrong.

First, software engineering and system design. This is still the first qualification and it is not negotiable. An agent can generate code. It cannot design a system. It does not know where the boundaries belong between services, or what data model will survive the next three years of business changes, or why the last team's architecture collapsed under load at two A M on a Thursday. That knowledge comes from building systems, operating them in production, and learning from the ones that failed. If someone cannot whiteboard a system, reason through its failure modes, and explain the tradeoffs they made without referencing what an agent suggested, they are not a principal. They are a prompt operator. You can hire prompt operators for considerably less than three hundred thousand dollars.

Second, context architecture. Can this person take a complex business domain like your payment system, your claims process, or your supply chain and break it into pieces an agent can work within? The engineer who can explain the payment system's rules in three sentences directs an agent that builds the right thing. The one who cannot gets code that compiles and fails the first time a real user touches it.

Third, specification skill. Can they externalize their thinking with enough precision that another intelligence, human or AI, could execute it without a follow-up conversation? Most engineers never had to do this. They held the requirements in their head and typed them directly into the codebase. That worked when the only consumer of their intent was their own hands.

Fourth, judgment under speed. An agent produces code fast. The engineer who cannot evaluate that output just as fast becomes the bottleneck. You need people who can read a function, spot the edge case the agent missed, identify the security vulnerability it introduced, and decide whether to fix or rewrite, all in minutes. You do not train that in a workshop. It comes from years of watching systems break and getting paged when they do.

Fifth, governance instinct. Can they build the compliance framework, not just follow one that somebody else wrote? Can they design an audit trail, not just generate an artifact? This is the qualification that did not exist two years ago, and it is the one that separates a principal from a senior engineer who happens to be good with agents.

Sixth, intellectual honesty. The most dangerous engineer on an AI-native team is the one who accepts agent output without understanding it. You need people who say I do not understand what this does yet instead of it passes the tests, ship it. Hire people who tell you what they do not know. That is not a weakness. It is the instinct that keeps agent-generated code from becoming agent-generated liability.

Now. Here is the operating model. Product managers build proofs of concept, not specifications. A product manager talks to a customer, builds a working proof of concept with an agent, iterates with two customers, and hands engineering something validated.

Engineers harden. They do not greenfield. The engineer's job shifts from interpreting a specification and building to taking working software and making it production-grade. Error handling, load resilience, security review, and monitoring. That is a more demanding job than the old one.

Testing is the Testing Square. The testing pyramid was a financial compromise. Agents removed the cost constraint. An AI-native team runs equal investment across unit, integration, end-to-end, and contract tests. The pyramid is obsolete.

Tooling is standardized. One agent platform. One set of guardrails. Your best salesperson did not pick Salesforce. Your Chief Revenue Officer did. The same logic applies to AI tools.

What about the leadership model? Every executive I walk through this model asks the same question, and it reveals how deeply the old structure is wired into their thinking.

If I have four principals, who manages them?

Nobody manages them. That is the point. What they need is a connector. In the house analogy, this is the general contractor. Not the person who tells the electrician how to wire a panel. The electrician knows how to wire a panel. The general contractor makes sure the right trades show up in the right order, that the permit is pulled, and that the homeowner's change orders get communicated before the drywall goes up. They coordinate sequence and priority. They do not supervise technique.

That person might be a Vice President of Engineering who still builds, or a Chief Technology Officer who has not retreated entirely into strategy decks, or a principal among the four who rotates into the connector role because they happen to be good at translating between business language and technical reality.

What that person is not is a traditional people manager running weekly one-on-ones about career ladders and tracking utilization. And while I am on the subject, the weekly one-on-one has become the code review of Human Resources. In engineering, we held onto manual code review for years after automated testing and linting made most of it redundant, because it felt like the responsible thing to do. The weekly one-on-one is the same pattern in a different department. It was designed for a world where the manager needed to stay close to the work because the people doing it were still learning how. For four self-directed principals who have been shipping software longer than some of your managers have been in the industry, a standing weekly meeting where someone asks how are things going is the organizational equivalent of a senior architect submitting a pull request so a mid-level can approve it.

For most of your organization, the one-on-one still serves a real purpose. For these four people, the structure is backwards. Replace it with what they actually need: a regular conversation with the business about what matters most, and the trust to go build it.

Look. Here is the ninety day proof. You have seen ninety day plans before. This is not a transformation roadmap. It is a proof of concept for a new operating model that either validates the approach or kills it.

Days one to thirty. Map your real value streams. Pick your AI tooling stack. Identify four principals who demonstrate judgment, specification skill, deep domain knowledge, and the governance instinct to own their compliance workflow. Give them a real line-of-business application with real users.

Days thirty-one to sixty. The four principals ship production software. Measure everything: cycle time, defect rate, cost per feature, and compare it honestly to what your traditional team produces. Start the people assessment across the broader organization. Give every engineer four hours a week of real learning time.

Days sixty-one to ninety. Propagate the data. Stand up a second team. Not everyone will qualify on the first pass. That is expected and it is not a verdict on their worth. Some of the people who do not qualify in month two will qualify in month six if you gave them the learning time you promised.

By day ninety you will have proven that four people with deep experience, proper tooling, direct access to the business stakeholder, and ownership of their governance model will outperform twenty people with a management structure and an inherited process framework. Once that proof exists, the conversation shifts from whether to change to how to change in a way that respects the people who got you here.

If the numbers do not materialize by day sixty, kill it. That is the exit ramp. The ninety day proof is designed to fail fast and fail cheap before you commit organizational capital to a restructuring you cannot reverse. The pilot team is four people and one application. The blast radius is contained.

There are questions you should be asking. I have shared this model with enough leaders to know which questions come up in the first ten minutes. Let me answer them honestly rather than pretend they do not exist.

The first is, what happens when one of the four leaves? This is the bus factor question and it is the right one to ask. Four people is a concentration risk. I will not pretend otherwise. But here is what I have observed: the bus factor fear assumes that institutional knowledge lives in people's heads and nowhere else. In the old model, that was true, because the process did not require anyone to externalize their thinking. Principals working with agents externalize everything. The specifications, the architectural decisions, the business rules, and the compliance rationale are all written down because agents need it written down to function. The documentation is a byproduct of the workflow, not an afterthought somebody files after the sprint.

That does not make the risk zero. It makes the risk manageable. You mitigate it the same way a good general contractor does: you pay well, you treat them with respect, you give them interesting work, and you do not pretend they are replaceable by someone cheaper. Retention is not a mystery. Principals stay where the work is real, the autonomy is genuine, and the bureaucracy is low.

The second question is, who carries the pager? Four principals means a four-person on-call rotation. That is tight. You are right to flag it. But consider what you are comparing it to: a twenty-person team where the same two seniors get paged every time because nobody else understands the system well enough to fix it at two A M. I have seen that pattern in every large organization I have worked with. The nominal rotation is twenty people. The effective rotation is two or three. Four principals who all understand the entire system is a better on-call posture than twenty engineers where three of them carry the real weight.

As the portfolio grows and you stand up second and third teams, the on-call capacity grows with it. The key is that every principal on every team can support the systems they built because they built them with agents that documented the decisions along the way.

The third question is, what does the transition actually cost? I am not going to pretend the transition is free. During the ninety day proof you are running both models simultaneously: your existing team continues to operate while the four principals prove the new model. That is a double-carry on your Profit and Loss for one quarter. If the proof works and you decide to restructure, you are looking at severance costs. Budget three to six months per affected employee depending on tenure. You have recruiting costs for principals you do not currently have. Figure fifty thousand to eighty thousand dollars per head through a specialized recruiter or six months of internal sourcing. And there will be a productivity valley during knowledge transfer that lasts two to three months.

Add it up and the transition costs real money, probably one million to two million dollars for a twenty-person team restructuring, depending on your compensation bands and your jurisdiction. That is not nothing. But measure it against the two point nine million dollar annual savings and the six times throughput improvement. The payback period is under a year. Your Chief Financial Officer has approved worse bets with longer payback.
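The payback claim checks out against the transcript's own ranges. A back-of-envelope sketch, using the article's estimates rather than real data:

```python
# Payback period for the transition, using the ranges given in the text.
# These are the article's illustrative estimates, not measured figures.

annual_savings = 2_900_000  # staffing-model savings per year, from the cost comparison

for transition_cost in (1_000_000, 2_000_000):  # low and high estimates
    payback_months = transition_cost / annual_savings * 12
    print(f"${transition_cost:,} transition -> {payback_months:.1f} month payback")
```

Even at the high end of the transition-cost range, the payback lands around eight months, comfortably inside the under-a-year claim.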

Now. This does not apply to everything we build. I am talking about line-of-business applications. Ranch homes. Your core platform, your machine learning infrastructure, your real-time trading system, your mission-critical systems with fifteen years of regulatory accretion, those are not ranch homes. They are hospitals and bridges, and they require a different conversation about team structure, risk tolerance, and regulatory complexity.

Most enterprises have dozens of ranch homes for every hospital. Start with the ranch homes. The proof compounds.

Finally. There are two tests. If you take nothing else from this conversation, take these two tests. Apply them honestly to your organization, and they will tell you whether you are AI-native or wearing a costume.

Test one. The staffing model. Look at your line-of-business application teams. Are you running four principals who talk directly to the business? Or are you running twenty juniors with a management layer that exists to coordinate people who need coordinating? If your Finance team is still optimizing for the cheapest cost per head and your Human Resources team is still benchmarking against twenty eighteen data, you are not AI-native. You are buying bargain typists for work that no longer requires typing.

Test two. The governance model. Does your team own their workflow, their compliance controls, their audit trail, and their definition of what reviewed and tested and approved mean in an agent-driven world? Or are they running agents inside a governance framework that was designed for humans writing every line by hand? If your governance model predates your AI adoption, you are not AI-native. You are AI-assisted with the same bureaucracy and a faster text editor.

Both tests have to pass. A team of four principals running inside twenty nineteen governance is expensive talent slowed down by inherited process. A team of twenty juniors with a new governance model is the same coordination overhead with fancier compliance artifacts. You need the right people and the right governance together, or neither one works.

I want to close with something that is easy to forget when you are reading about operating models and cost structures. Every person on your current team showed up because you hired them. They moved their families. They turned down other offers. They built their careers around the structure you created. If you are going to change that structure, and I believe you should, you owe them the same respect you would want if someone changed the rules of your job while you were doing it well.

Do it. Do it soon. But do it like someone who remembers what it felt like to be the new kid on the job site, hoping the foreman would teach you something worth knowing.

Is your organization passing both tests today? And if it is not, what happens to the people, the budgets, and the competitive position you are protecting while you wait for Finance and Human Resources to catch up?
