If Your Engineers Only Get Thirty Minutes to Learn, That Is Not Their Failure. It Is Yours.

Daniel called on a Thursday afternoon. That is not his real name, and I am intentionally blurring the identifying details because the pattern matters more than the person. He had the tone — the one I hear from leaders who are genuinely trying and genuinely stuck. Somewhere between frustrated and exhausted.

He runs engineering at a company doing solid revenue with a couple hundred engineers. He was brought in eighteen months ago to modernize the org. Push AI adoption. Get teams shipping faster. The board wanted transformation. Daniel wanted to deliver it.

I like Daniel. He cares about his people. He is not posturing. He is not checking a box for the CEO. He actually wants this to work.

That is why this conversation was hard.


The First Ten Minutes

The first ten minutes were fine. Daniel walked me through what he had tried. Tooling rollouts. Copilot licenses. A couple of internal hackathons. A Slack channel for AI tips. An LLM evaluation committee. A training budget.

Standard playbook. I have heard it a hundred times.

Then he said the thing that stopped me.

“My teams only really have about thirty minutes a week they can dedicate to learning this stuff. Maybe an hour if we block a Friday afternoon. But realistically — thirty minutes.”

I asked why.

“Sprint commitments. Velocity targets. We are already stretched. I cannot just pull people off delivery to go experiment.”

I did not say anything for a second. Not because I was surprised. Because I was trying to figure out the kindest way to say what I needed to say.


Here Is What I Told Daniel

Daniel. You are asking your engineers to learn the most significant shift in how software gets built since the internet — in thirty minutes a week. Between sprint commitments and velocity targets that were set before any of this existed.

That is not an adoption strategy. That is a signal to your team that you do not actually prioritize this.

I know that is not your intent. I know you care. I know you have budget pressure and delivery targets and a board that wants results by Q3. I know you are trying to balance a dozen things at once and thirty minutes felt like the space you could carve out without blowing up something else.

But here is what your engineers hear when you give them thirty minutes: this is not important enough to change anything for.

And they are rational people. They respond to that signal exactly the way you would. They nod. They attend the optional training. They do not change how they work. And then you wonder why adoption is not sticking.

You built that outcome. Not because you are careless. Because the system you are running was never designed to absorb this kind of change on the margins.


The Question I Asked Next

I asked Daniel if he could show me his value streams.

Not his org chart. Not his team topology. Not his Jira board structure. His value streams. The actual flow of work from idea to production — where it moves, where it waits, where it gets handed off, where it gets blocked, and how long each phase takes.

He paused.

“We have not really mapped that out formally.”

I hear this constantly. I wrote about how to actually do this — a whiteboard, a Tuesday afternoon, and the people who do the work. It is not complicated. It is just revealing. Leaders running hundreds of engineers across dozens of teams, spending millions on tooling and headcount, and they cannot draw the path a unit of work takes from request to delivery.

If you do not know your value streams, you do not know where your time goes. And if you do not know where your time goes, you cannot tell me where AI should be applied. You are guessing. You are buying tools and hoping they land somewhere useful.

Value stream mapping is not a nice-to-have. It is not a process improvement exercise you get to when things calm down. It is the diagnostic that tells you where your organization actually spends its time. Total time: maybe forty-five days to ship a feature. Actual work: maybe eight days. Waiting, handoffs, approvals, queue time: the rest.

You optimized the eight days. You gave your engineers thirty minutes a week to optimize the eight days. Meanwhile thirty-seven days of waste sit there untouched because nobody mapped it.

That is not an AI problem. That is a leadership problem.


The Manual QA Defense

Then Daniel brought up testing.

“We still need our manual QA process. Our product is complex. Automated testing does not catch everything. We have tried.”

I asked him what the ROI looked like on the current manual QA setup.

Silence.

Not because there was no ROI. Because nobody had measured it. The team had always had manual QA. It was load-bearing. It was how things worked. Questioning it felt like questioning whether the building needed walls.

Here is the thing. I am not telling you automated testing catches everything. I am not telling you to fire your QA team tomorrow. I am telling you that if you are defending a process that you cannot measure — in an era where AI can generate, execute, and maintain test suites at a scale your manual team will never match — you are not making a technical argument. You are making an emotional one. I wrote about why the separate quality organization expired and about how the Testing Pyramid itself was a financial compromise that no longer applies.

And that is fine. Change is hard. Letting go of something that kept you safe for years is genuinely difficult. I respect that.

But you called me because adoption is not working. And when I look at your organization — engineers with thirty minutes to learn, value streams nobody has mapped, a manual QA process defended on instinct instead of data — I can see exactly why it is not working. Anyone looking at this from the outside can see it. And I suspect the honest people inside your organization see it too. They are just not sure it is safe to say it.


The Process Framework Addiction

Daniel mentioned that his teams were “running a hybrid Agile-Kanban approach” and that they were “still tuning it.”

I want to be careful here because I know how much energy organizations have poured into process frameworks. I know teams that spent years getting good at Scrum. I know managers whose entire identity is wrapped up in being excellent Agile coaches. I know organizations that restructured reporting lines around SAFe.

That investment was real. The effort was real. The people who led it were not wrong at the time.

But the conversation about whether Agile or Kanban or some hybrid is the right framework — that conversation ended two years ago. Not because the frameworks were bad. Because the constraint they were designed to manage changed.

Agile was a response to uncertainty in a world where building was slow and expensive. You could not predict what to build, so you iterated in short cycles. Kanban was a response to flow problems — visualize work, limit work in progress, optimize throughput. Both frameworks optimized around a world where human engineering capacity was the bottleneck.

That bottleneck moved.

When an engineer with an AI agent can produce in a day what used to take a sprint, the question is no longer “how do we organize our sprints.” The question is “what do we build and how fast can we validate it.” That is a fundamentally different question, and Agile and Kanban were not designed to answer it.

If you are still tuning your hybrid framework, you are polishing a system that was built for a constraint that no longer binds. That is not transformation. That is comfort.


The Part You Own

Daniel, I want to be direct with you because I think you can hear it.

This is not an adoption problem. This is not a tooling problem. This is not an engineer motivation problem.

This is a leadership problem. And you own it.

You own the sprint structure that leaves thirty minutes for learning. You own the missing value stream maps. You own the unmeasured QA process. You own the process framework that nobody has questioned in two years. You own the incentive structure that rewards delivery velocity over capability growth.

Your engineers are doing exactly what you asked them to do. They are shipping against sprint commitments, maintaining velocity targets, and following the process. They are rational actors in the system you built.

If that system does not produce AI adoption, the system is the problem. Not the people inside it.


What I Told Daniel to Do

Map your value streams. All of them. Not the idealized version. The real one. Where does work actually flow? Where does it actually wait? Put numbers on it. Total lead time. Active work time. Wait time. Handoff count. Approval gates. Manual steps that could be automated. Do this for your three most critical delivery paths. It will take two weeks with a small team. It will be the most important two weeks you spend this quarter.
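The arithmetic behind a value stream map is simple once the stages are written down. Here is a minimal sketch in Python, with hypothetical stage names and numbers (any real mapping would substitute its own), showing how lead time, wait time, and flow efficiency fall out of the data:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    active_days: float  # hands-on-keyboard time
    wait_days: float    # queue, handoff, and approval time

# Hypothetical stages for one delivery path -- replace with your own map
stages = [
    Stage("refinement", 1, 6),
    Stage("development", 5, 3),
    Stage("code review", 0.5, 2),
    Stage("manual QA", 1, 10),
    Stage("release approval", 0.5, 16),
]

active = sum(s.active_days for s in stages)
wait = sum(s.wait_days for s in stages)
lead_time = active + wait
flow_efficiency = active / lead_time

print(f"Lead time: {lead_time:.0f} days")
print(f"Active work: {active:.0f} days, waiting: {wait:.0f} days")
print(f"Flow efficiency: {flow_efficiency:.0%}")
```

With these made-up numbers the output is the article's scenario exactly: forty-five days of lead time, eight days of work, thirty-seven days of waiting, roughly 18% flow efficiency. The point of the exercise is that last number.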

Know your productivity metrics. Not story points. Not velocity. The real ones. Cycle time. Deployment frequency. Change failure rate. Time to restore. Customer-facing lead time. If you cannot produce these numbers by team within an hour of being asked, you are flying blind. You cannot improve what you have not measured, and you cannot adopt AI strategically if you do not know where the leverage is.
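Two of those metrics can be computed from nothing more than a deployment log. A rough sketch, using an invented log format (timestamp, whether the deploy caused an incident, hours to restore):

```python
from datetime import datetime

# Hypothetical deployment log for one team over a 28-day window
# (deployed_at, caused_incident, hours_to_restore)
deployments = [
    (datetime(2026, 1, 5), False, 0),
    (datetime(2026, 1, 9), True, 3),
    (datetime(2026, 1, 14), False, 0),
    (datetime(2026, 1, 20), False, 0),
    (datetime(2026, 1, 28), True, 6),
]

window_days = 28
deploy_frequency = len(deployments) / window_days  # deploys per day
failures = [d for d in deployments if d[1]]
change_failure_rate = len(failures) / len(deployments)
mean_time_to_restore = sum(d[2] for d in failures) / len(failures)  # hours

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean time to restore: {mean_time_to_restore:.1f}h")
```

If producing this for every team takes more than an hour, that is itself a finding.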

Give your engineers real time. Not thirty minutes scraped from the margins. Dedicated time. Structured time. Time where the expectation is not “also keep shipping” but “learn how your job is changing.” Two hours a week minimum. Four is better. Yes, velocity will dip for a quarter. That is the investment. If you cannot make that trade, you are telling me the transformation is not actually a priority — and your engineers already know it.

Stop defending processes you cannot measure. If your manual QA process is essential, prove it. Measure defect escape rate. Measure the cost per bug found manually versus what an AI-generated test suite catches. If the numbers support manual QA, keep it. If they do not — and I suspect they do not — have the courage to change.
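The QA measurement does not need to be sophisticated to be decisive. A back-of-the-envelope sketch, with every figure invented for illustration (plug in your own):

```python
# Hypothetical monthly figures -- substitute your own measurements
manual_qa_cost = 60_000      # salaries + tooling for manual QA, per month
manual_bugs_found = 40       # defects caught by manual QA per month
escaped_to_prod = 12         # defects that slipped past manual QA anyway

automated_cost = 15_000      # AI-generated test suite: compute + maintenance
automated_bugs_found = 55

cost_per_bug_manual = manual_qa_cost / manual_bugs_found
cost_per_bug_automated = automated_cost / automated_bugs_found
escape_rate = escaped_to_prod / (manual_bugs_found + escaped_to_prod)

print(f"Manual QA: ${cost_per_bug_manual:,.0f} per bug found")
print(f"Automated: ${cost_per_bug_automated:,.0f} per bug found")
print(f"Defect escape rate under manual QA: {escape_rate:.0%}")
```

Whatever the real numbers turn out to be, having them ends the instinct-versus-instinct argument.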

Stop tuning the process framework. Your teams do not need a better hybrid Agile-Kanban configuration. They need clarity on what to build, time to learn new tools, and a delivery path with thirty-seven fewer days of waste in it. That is a value stream problem, not a ceremony problem.


You Are Holding an Oil Can Next to a Steam Engine

I want to say something that might be harder to hear than everything else.

Everything I just described — the value stream mapping, the metrics, the dedicated learning time, the QA measurement — that is the minimum. That is the floor. And honestly, Daniel, I am not sure the floor is enough for where you are.

Because here is what I see when I look at the whole picture.

You are treating AI like an oil can. You are walking around your organization squirting lubricant on the gears of your current machine. A little Copilot here. A hackathon there. Thirty minutes of learning greased into the sprint schedule. And when the machine still runs slow, you try a different oil. A different vendor. A different training format.

But Daniel — the machine is a steam engine. And it is 2026.

You are not in a world where better lubrication fixes the problem. You are in a world where the mode of transportation changed. You would not stand next to a coal-fired car in 2026 and say “we just need to optimize our fuel intake process.” You would look at it and ask why you are still burning coal.

That is the question I need you to ask about your engineering organization. Not “how do we adopt AI into our current system.” But “why does our current system still look like this.”


First Principles. Not Incremental Improvement.

This is the part where I stop talking to Daniel and start talking to whoever is above Daniel.

If your engineering leader has had eighteen months — and in AI time, eighteen months is a decade — and the organization still cannot tell you its value streams, still defends unmeasured processes on instinct, still gives engineers thirty minutes a week to learn, and still debates which flavor of Agile to run — then this is not a problem you solve by tuning harder. This is a problem you solve by going back to first principles.

What would this organization look like if you built it today? Not inherited it. Built it. From scratch. With everything you now know about what AI agents can do, about what a small team with the right tools can ship, about how fast the market is moving.

You would not build what you have. Nobody would.

You would build something leaner. Something where engineers spend most of their time building, not waiting. Something where customer signal reaches the team in hours, not quarters. Something where testing is continuous and automated, not manual and ceremonial. Something where the question “what framework are we running” does not even make sense because the cadence is dictated by what customers need, not by what a process consultant recommended four years ago.

That is the organization you need. And you do not get there by greasing the steam engine. You get there by building the new thing alongside it.

Stand up a parallel team. Small. Five to ten people. Give them real autonomy, real tools, real time, and a real customer problem. No legacy process. No inherited sprint structure. No Jira board configured by someone who left two years ago. First principles. Let them build the way building works now. Measure what they produce. Compare it honestly to what the rest of the organization produces.

That comparison will tell you everything you need to know.


This Is a CEO and Board Conversation Now

I say this with genuine care for Daniel and for leaders like him. But if your engineering leader has had eighteen months and could not create the conditions for transformation — could not clear the calendar, could not map the value streams, could not question the sacred processes — then the CEO and the board need to ask an honest question.

Is this a resources problem or a leadership problem?

Maybe Daniel needs more air cover. Maybe he needs the CEO to publicly reprioritize and take the heat for a velocity dip. Maybe he needs a board that understands the investment required and gives him room to make it. If that is the case, then the failure is upstream — and the CEO owns it.

Or maybe the role needs someone who will not ask permission to do what is obviously necessary. Someone who walks in on day one, maps the value streams by day fifteen, stands up the parallel team by day thirty, and has data by day sixty that makes the case for everything else.

Either way, the current trajectory does not work. I am not being dramatic. I am doing math.

It is early 2026. By 2028, the gap between organizations that figured this out and organizations that are still greasing the steam engine will not be a competitive disadvantage. It will be an existential one.

Five kids from Stanford with AI agents and a clear customer problem will be shipping faster than your two hundred engineers. Not because they are smarter. Because they are not dragging forty-five-day cycle times, manual QA ceremonies, process framework debates, and thirty-minute learning blocks behind them.

They started with a blank page. You are still annotating a page someone wrote in 2019.

That is the gap. And it is widening every month you spend optimizing what you have instead of building what you need.


The Kind Part

I told Daniel all of this. Every word. And I meant it with genuine respect.

He is not failing because he is incompetent. He is not failing because he does not care. He is failing because he is trying to pour a new future into an old container and wondering why it will not fit.

Most leaders I talk to are in Daniel’s position. Good people. Smart people. People who genuinely lose sleep over their teams and their organizations. They are stuck not because they lack intent but because nobody has told them clearly enough: you cannot incrementally improve your way to a fundamentally different operating model. At some point you have to stop lubricating the steam engine and build the new thing.

That point was eighteen months ago. But today is the second best time.

Map the value streams. Measure what matters. Stand up the parallel team. Give your people real time to learn. And if the organizational structure will not let you do those things — if the sprint commitments and the velocity targets and the process ceremonies are more powerful than the transformation mandate — then escalate. Go to the CEO. Go to the board. Tell them what you told me. Tell them the truth.

Because the market is not waiting for you to finish tuning your hybrid Agile-Kanban configuration.

The market is not waiting at all.
