
Your Engineering Team Ships in 28 Days. Ten of Those Days Are Work. The Other Eighteen Are a Leadership Problem.


It is 2:15 on a Thursday afternoon and your VP of Engineering is presenting a slide that says “28-day average cycle time.”

Twenty-eight days from idea to production. Your board has been staring at that number for three quarters. Your CEO mentioned it at the last all-hands. Your CTO gets asked about it in every one-on-one with the COO, and every time she gives the same answer — they are working on it, they hired two more engineers last quarter, they moved to two-week sprints, they adopted a new CI pipeline that cost $340,000 a year.

Nobody in the room has asked the only question that matters.

Of those 28 days, how many are work?

Not waiting. Not sitting in a queue. Not blocked by an approval that takes four days because the approver is in meetings until Thursday. Not parked in a staging environment that three teams share and one team monopolizes. Not stalled in a code review backlog because your senior engineers are in six hours of meetings a day and review happens in the gaps between lunch and their 2 p.m.

Work. Fingers on keyboard. Code being written, tested, reviewed, deployed.

The answer, at most organizations I work with, is ten. Sometimes eight. Sometimes twelve if you are generous about what counts as “work.”

Ten days of work. Eighteen days of wait.

Sixty-four percent of your cycle time — not doing anything. Not because your engineers are lazy. Not because they lack talent. Because your organization is designed to create queues, and nobody has ever drawn a picture of it.

That picture has a name. It is called a value stream map. Most executives in software never bother to draw one.


What a Value Stream Map Actually Is

A value stream map is not complicated. It is not a six-month consulting engagement. It is not a software tool that requires a license and a training program. It is not a SAFe ceremony or an Agile ritual or a Lean Six Sigma belt-earning exercise.

It is a picture. On a whiteboard. Of how work actually moves through your organization.

Not how your process documentation says it moves. Not how your Jira workflow assumes it moves. Not how your PMO reports it moves in the monthly steering committee deck. How it actually moves. From the moment someone says “we should build this” to the moment a paying customer uses it in production.

You draw every step. You write how long each step takes. You write how long work waits between steps.

That is the whole method.

Taiichi Ohno invented this at Toyota in the 1950s. The manufacturing world has used it for seventy years to eliminate waste worth billions. Your CTO probably studied it in an operations management course and then left it there.

This is what it looks like in real life.


A Tuesday Afternoon With a Whiteboard

You need a conference room. A whiteboard. Sticky notes in three colors. Markers. And the people who actually do the work.

Not the people who manage the people who do the work. The people who touch the code. Who file the tickets. Who run the deployments. Who approve the pull requests. Who sign off on the security reviews. Who push the button that sends software to production.

Get them in a room. Two hours. Three at most. No laptops.

Start at the right side of the whiteboard. Write “customer using the feature in production.” That is the finish line. Everything you draw flows toward that.

Now go to the left side. Write “idea approved” or “ticket created” or whatever your starting trigger is. That is the starting line.

Now fill in the middle.

What happens first? Someone writes a spec? A product manager creates a Jira ticket? A designer produces a mockup? Write it down. How long does that take? Not the calendar time — the actual work time. The designer spends two hours on the mockup. Write “2 hours.” But the ticket sat in the design backlog for five days before she picked it up. Write “5 days wait” between the previous step and the design step.

Keep going.

The designer finishes. Where does it go? To engineering. How long does it sit before an engineer picks it up? Three days. The engineer works on it. How long? Two days. Then it goes to code review. How long does code review take? The reviewer spends 45 minutes. But the pull request sat in the review queue for two days because the reviewer was in meetings.

Then QA. One day of work. Two days of wait. Then staging deployment. The deployment itself takes 20 minutes but the staging environment is shared and the queue is three days. Then product sign-off. An hour of work. Two days of wait because the product manager was at a conference and then had to clear her backlog.

Then security review. If you need one. A day of work. Five days of wait because your single security engineer reviews everything sequentially and her backlog is measured in weeks, not days.

Then production deployment. If you have CI/CD, minutes. If you do not, add a change approval board — a weekly meeting where six people review a spreadsheet of pending changes and approve deployments they do not understand.

Draw it all. Every step. Every wait. Be honest. Be specific. Use actuals, not targets. Not what your process says should happen. What actually happened on the last three features your team shipped.

When you are done, add up the work time. Add up the wait time.
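The tally at the end is simple arithmetic. Here is a minimal sketch of it in Python, using the illustrative work and wait figures from the walkthrough above — your own map will have different numbers, and the step names here are just labels:

```python
# Tally of the whiteboard walkthrough. Work and wait figures are the
# illustrative examples from the text, converted to days assuming an
# 8-hour workday; substitute your own map's numbers.
steps = [
    # (step, work in days, wait before the step in days)
    ("design",      2 / 8,       5),  # 2 hours of design, 5 days in backlog
    ("engineering", 2,           3),  # 2 days of coding, 3 days to pick up
    ("code review", 0.75 / 8,    2),  # 45 minutes of review, 2 days in queue
    ("qa",          1,           2),
    ("staging",     (20 / 60) / 8, 3),  # 20-minute deploy, 3-day shared queue
    ("sign-off",    1 / 8,       2),
    ("security",    1,           5),
]

work = sum(w for _, w, _ in steps)   # total hands-on time
wait = sum(q for _, _, q in steps)   # total queue time
flow_efficiency = work / (work + wait)

print(f"work: {work:.1f} days, wait: {wait:.1f} days, "
      f"flow efficiency: {flow_efficiency:.0%}")
```

Even this rough example lands under 20% flow efficiency — the wait dominates before you add a single approval gate.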


The Number That Changes Everything

Total time: 28 days. Actual work: 10 days. Waiting: 18 days.

Look at those numbers. Your engineering team did not have a productivity problem. Your organization had a flow problem. Still has one.

Those 18 days of wait are not engineering days. They are organizational days. They are the accumulated cost of handoffs, queues, shared resources, approval gates, and calendar conflicts that you — or the leader before you — put in place.

And here is what most executives miss: when you invested in new CI/CD tooling, or adopted AI coding assistants, or ran that velocity improvement initiative — you optimized the 10 days. The work time. You made the 2-day engineering task take 1.5 days. You made code review 20% faster. You saved maybe a day and a half across the entire stream.

You optimized 35% of your cycle time and left 65% untouched.

That is not a technology problem. That is a leadership problem.

Those 18 days of wait exist because of decisions you made. Or decisions you inherited and never questioned. The approval gates. The shared staging environment. The weekly change advisory board. The six hours of meetings that keep your senior engineers out of code review. The handoff from design to engineering that requires a spec document nobody will read twice.

You designed the wait.

Or you allowed it to persist because nobody drew the picture that made it visible.


Why Your Organization Has Never Done This

I ask this question in every engagement. “Have you ever mapped your value stream — end to end — idea to production?”

The answer is almost always no.

Every engineering leader has heard of value stream mapping. It is in every Lean book. Every DevOps conference talk. Every consulting deck about improving software delivery. It is not obscure.

They just never do it.

The excuses are always the same.

“We do not have time.” You do not have two hours — once — to understand why 64% of your cycle time is wait? You have time for a weekly change advisory board that takes 90 minutes and adds three days of latency to every deployment. But you do not have two hours to question whether that meeting should exist.

“Our process is too complex to map.” Your process is a series of steps with waits between them. It is not more complex than a Toyota assembly line, and Toyota mapped theirs in the 1950s with pencils. If your process is too complex to draw on a whiteboard, that is the finding. The complexity itself is the problem. You just discovered it without spending a dollar.

“We already have metrics.” You have metrics that measure what your tools capture. Jira measures ticket throughput. GitHub measures pull request cycle time. Your CI system measures build duration. None of those measure the full stream. None of them capture the five days a ticket sat in a design backlog. None of them capture the three days a pull request waited because the reviewer was in meetings. None of them capture the week a feature waited for the change approval board’s Thursday slot. Your metrics are partial. They measure the parts of the stream that are already instrumented, which are the parts that are already fast. They do not measure the wait — which is where the money is.

“That is a manufacturing thing.” Software delivery is a flow system. Work enters. Work moves through stages. Work exits as value. The physics are identical. The inventory in a software organization is not physical — it is partially completed work sitting in queues. But the cost is real. Every feature waiting in a queue is value you built but have not delivered. It is inventory. It has a carrying cost. And unlike physical inventory, it degrades — the longer a feature waits, the more likely the context is stale, the requirements have shifted, or the market has moved past the problem.


What You Will Find

I have done this exercise with dozens of organizations. The pattern is consistent enough to predict.

You will find that 55% to 75% of your cycle time is wait. Not work. Wait. This is not unusual — it is the norm. Most organizations have never looked at the ratio, so the number is a shock. But the math is straightforward. Every handoff creates a queue. Every shared resource creates contention. Every approval gate creates a calendar dependency. The more handoffs, shared resources, and gates you have, the more wait you have.

You will find one or two massive bottlenecks that account for the majority of wait time. It is rarely distributed evenly. Usually there is one stage where work piles up. Maybe it is code review — because your three senior engineers are the only ones authorized to approve, and they are in meetings six hours a day. Maybe it is the staging environment — because three teams share one and the deploy queue is first-come-first-served with no SLA. Maybe it is the security review — because you have one security engineer and she reviews everything, and her backlog is three weeks deep.

You will find invisible queues nobody designed. Work sitting in Slack threads waiting for a decision that nobody realized was pending. Features waiting for a design review that is not on anyone’s calendar because the process says “design sign-off required” but does not say when or by whom. Deployments waiting for a manual step that was automated two years ago in one team but not in the other three. These are the queues that do not show up in Jira because Jira tracks the work states, not the wait states.

You will find rituals that persist without delivering value. The weekly status meeting where eight engineers give updates that could have been a dashboard. The sprint retrospective that produces action items nobody tracks. The quarterly planning ceremony that takes two weeks of the entire organization’s time and produces a plan that is obsolete by week three. These rituals were created for legitimate reasons. The reasons are gone. The meetings are not. Meetings, once created, live forever.


The Rituals Your Predecessors Created

Let me be specific about this. Because I find that most executives in technology organizations do not understand how the traditions they maintain came into being.

Your change advisory board — the weekly meeting where a committee reviews and approves production deployments — was created because in 2009 someone pushed bad code to production and it caused a four-hour outage that cost the company $200,000. Your CTO at the time — not you, the one three CTOs ago — convened a meeting and said “from now on, every deployment gets reviewed before it goes live.” The meeting has run every week since.

Seventeen years later, you have automated testing. Continuous integration. Canary deployments. Feature flags. Instant rollback capability. The risk that created the meeting is mitigated by six different automated systems that did not exist when the meeting was invented.

But the meeting still runs. Every Thursday. Ninety minutes. Eight people. Three days of latency added to every deployment.

Nobody canceled it because nobody knows why it was created. The institutional memory left the building in 2012. The rationale was never documented. The meeting exists because the meeting exists.

Your two-approver code review requirement — every pull request needs sign-off from two senior engineers — was created because a junior developer shipped a SQL injection vulnerability in 2016. Your Director of Engineering at the time — not the current one, two directors ago — mandated that all code needed two senior reviewers. Ten years later, you have automated security scanning, SAST tools, dependency checking, and AI-powered code review that catches injection vulnerabilities before a human sees the code. The risk is mitigated by tools that did not exist when the policy was written. The policy persists. It costs you 2.3 days of cycle time on every feature because your three senior engineers are drowning in review requests while sitting in six hours of meetings a day.

Your quarterly planning process — the two-week exercise where every team estimates quarterly commitments and negotiates dependencies — was created because in 2014 two teams built the same feature independently and nobody noticed until both were in production. The planning process was supposed to create coordination. It created a two-week organizational pause four times a year — eight weeks of reduced output annually — plus the overhead of maintaining plans that are wrong by week three. The duplicate-feature problem was solved in 2017 when you moved to a monorepo. The planning process survived.

These are composites from real organizations. Your details will differ. The structure usually does not. Rituals created by people who are no longer here to solve problems that no longer exist — still alive because no one has the authority, the awareness, or the political capital to question them.

And if you are the person who created those rituals — if you are at a company that grew from 20 engineers to 150 in four years and the bottlenecks are not inherited from three CTOs ago but are things you set up when there were 30 people — the exercise is the same but the politics are harder. It is easier to kill your predecessor’s rituals than your own. Do it anyway.

A value stream map makes them visible. It does not kill the rituals. You do. But it gives you the evidence to act without guessing.


How to Actually Do It: Step by Step

I said this takes a Tuesday afternoon. Here is what that looks like.

Before the Session

Pick one value stream. Not all of them. One. Your most important feature delivery path — the one that takes a customer request and turns it into production software.

Identify the people who actually touch the work along that path. Not their managers. The engineers, designers, QA specialists, security reviewers, DevOps engineers, product managers, and release managers who do the actual work. You need six to ten of them in the room. If you bring managers, they will describe the process they think exists. If you bring practitioners, they will describe the process that actually exists.

Book a room with a large whiteboard or a long blank wall. Bring sticky notes in three colors — one for process steps, one for wait states, and one for data (time, queue depth, defect rates). Bring markers.

During the Session

Step 1: Draw the boundaries. Mark the start event (ticket created, feature approved, whatever triggers work) and the end event (customer using the feature in production). Write them on the far left and far right of the whiteboard.

Step 2: Walk the stream forward. Start at the beginning. Ask: “What happens first?” Write it on a sticky note. Then: “What happens next?” Write that. Keep going until you reach production. Do not skip steps. Do not abbreviate. If there is a Slack message that informally triggers a handoff, that is a step. If there is a waiting period where work sits in someone’s “I’ll get to it” queue, that is a step. Map reality.

Step 3: Add the time data. For each process step, write two numbers. Process time — how long the work actually takes when someone is doing it. And lead time — how long it takes from when the work arrives at that step to when it leaves, including all waiting. The difference between lead time and process time is the wait. That is what you are looking for.

Step 4: Identify the queues. Between every process step, draw a triangle (the lean symbol for inventory/queue). Estimate how many items are typically waiting in that queue. If the code review backlog usually has eight pull requests waiting, write “8.” If the security review queue has a three-week backlog, write “15 business days.” These queues are your inventory — work you have invested in but have not delivered.

Step 5: Calculate the efficiency. Add up all the process times. Add up all the lead times. Divide process time by lead time. That is your flow efficiency. Below 15% is common. Below 25% is normal. Above 40% is good. Most software organizations, the first time they do this, land between 8% and 20%.

Step 6: Find the constraint. Look at the map. Where is the longest wait? Where is the biggest queue? That is your constraint. That is where improvement has the most leverage. Not the fastest step — the slowest queue.

After the Session

Take a photo. Digitize it if you want — a simple diagram in any tool works fine. Share it with your leadership team. Let the picture speak.

Then ask one question: what is the one change that would have the most impact on the biggest bottleneck?

Not ten changes. One. The theory of constraints says that improving anything other than the bottleneck is an illusion. Fix the constraint. Then map again. Find the new constraint. Fix that.


What Happens When AI Enters the Value Stream

This is where most executive teams get the math wrong.

You adopted AI coding assistants. Your engineers report that coding is 30% faster. Maybe 40%. Maybe — and this is the number your tool vendor put in a press release — 55%.

Great. Coding was already the fastest part of your value stream.

Go back to the map. Coding — the actual writing of code — was maybe 2 days out of your 28-day cycle. You just made it 1.2 days. You saved 0.8 days.

Your 28-day cycle is now a 27.2-day cycle. Congratulations.
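The math behind that underwhelming result is worth making explicit. A local speedup only moves end-to-end cycle time in proportion to the slice it touches — Amdahl's law applied to a value stream. A sketch, using the article's worked figures (the 40% speedup matches the 2-day-to-1.2-day example; the variable names are illustrative):

```python
# A local speedup barely moves end-to-end cycle time when most of the
# cycle is wait. Figures are the article's worked example.
cycle_days = 28.0
coding_days = 2.0   # the slice AI coding assistants actually touch
speedup = 0.40      # coding gets 40% faster

saved = coding_days * speedup       # days saved: 0.8
new_cycle = cycle_days - saved      # new cycle: 27.2 days
overall_gain = saved / cycle_days   # end-to-end improvement: under 3%

print(f"saved {saved:.1f} days -> {new_cycle:.1f}-day cycle "
      f"({overall_gain:.1%} end-to-end improvement)")
```

Run the same formula with the 18 days of wait as the slice being attacked and the leverage reverses — which is the whole argument of this section.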

Meanwhile, the 18 days of wait are untouched. The code review backlog is untouched. The staging queue is untouched. The weekly change advisory board still meets every Thursday. The security review backlog is still three weeks deep. The quarterly planning ceremony still consumes two weeks four times a year.

You optimized the thing that was already fast and ignored the thing that was already slow.

This is why your CFO sees 5% improvement on the P&L while your engineers report 30%. The engineering work got faster. The organizational flow did not. The gains are real — they are sitting in queues, absorbed by wait states that were invisible before and are invisible now.

A value stream map would have told you this before you wrote the check.

Here is the part that matters. AI does not just make coding faster. If you think about AI correctly — and most executives do not — AI can eliminate entire wait states.

Code review? An AI agent reviews every pull request in seconds, catching security vulnerabilities, logic errors, and style violations before a human looks at it. Your senior engineers still do the architectural review. But the three-day queue caused by routine review? Gone.

Security scanning? Automated. Continuous. Every commit. Every dependency. Every configuration change. Not a weekly batch process where a security engineer reviews a queue — a guardrail, not a gate. The three-week security review backlog does not need to exist.

Test creation and execution? Agents generate tests from specifications, run them against every change, and report results before the developer finishes their coffee. The QA queue? Shortened from days to hours.

Documentation? Agents generate release notes, API documentation, and changelog entries from the code itself. The documentation step that added two days and got skipped half the time anyway? Automated.

Deployment approval? If your automated test suite, security scanning, and monitoring are comprehensive enough — and agents make them comprehensive enough — the change advisory board becomes a rubber stamp on decisions already made by automated systems. The three-day wait for the Thursday meeting? Replaced by automated governance that runs in minutes.

This is what a value stream map shows you once you understand AI. The opportunity is not making the fast parts faster. It is eliminating the slow parts entirely. The handoffs. The queues. The approval gates designed for a world where human review was the only kind of review that existed.

A caveat that matters: if you are in a regulated industry — financial services, healthcare, energy, defense — some of your wait states are not organizational inertia. They are compliance requirements. Your change advisory board may be partially mandated by SOC 2, HIPAA, NERC CIP, or your regulator’s interpretation of change management. The value stream map will still show you which waits are compliance and which are habit. In my experience, the ratio is rarely what people expect. Organizations that tell me “we cannot change our process because of regulation” usually find that 30% of their gates are genuinely required and 70% are organizational additions that accumulated around the regulatory core. The map separates the two.

Your value stream map is the strategy document for your AI investment. Not a vendor evaluation matrix. Not a tool comparison spreadsheet. Not the CFO picking tools based on cost per seat. A picture of where your time goes — and where AI can give it back.


What It Looks Like When You Fix the Constraint

A $45 million B2B SaaS company. Eighty-two engineers. Average cycle time: 34 days.

They drew the picture on a Wednesday afternoon in March. The primary constraint was not where anyone expected. It was not code review. It was not deployment. It was the handoff between product and engineering — a spec-review process that required three stakeholders to approve a product requirements document before engineering could begin. Average wait at that gate: nine days. Not because the reviewers were slow — because they had thirty other specs in the queue and no SLA on turnaround.

They killed the spec-review gate. Replaced it with a 30-minute sync where the PM walked the lead engineer through a working prototype instead of a document. Engineering started building the same day.

Second constraint: a shared staging environment with a five-day queue. Three teams fighting for one environment. They spun up per-team ephemeral environments using their existing cloud infrastructure. Cost: $1,800 a month. Wait reduction: five days to zero.

Third constraint: code review. Their five senior engineers were the only approved reviewers. They expanded the reviewer pool to include four mid-level engineers who had been shipping for over a year, added an AI-powered first-pass review for security and style, and kept senior review for architectural decisions. Review wait dropped from 3.2 days to 0.7 days.

Total elapsed time: eleven weeks from the first whiteboard session to stable measurement of the new process.

Cycle time dropped from 34 days to 16. Flow efficiency improved from 11% to 28%. The company shipped more features in Q3 than in the previous two quarters combined — without hiring a single additional engineer.

That is what the picture shows you. Fix the constraint and the numbers move fast.


The Economics of Wait

Let me make this concrete.

Your engineering organization costs $20 million a year. Fully loaded — salaries, benefits, equity, tools, infrastructure, office space, management overhead. One hundred engineers at $200,000 average fully loaded cost.

Your flow efficiency is 15%. That is normal. That means your organization produces 15 cents of delivered value for every dollar of engineering time that passes through the system. The other 85 cents is wait.

$17 million tied up in a system that delivers too slowly. Not because your engineers are staring at walls. They are busy. They are in meetings. They are context-switching between three projects because each one is blocked waiting for something. They are writing documentation that an agent should generate. They are attending ceremonies that should be async updates. They are manually reviewing code that machines should pre-screen.

Busy. Not flowing.

The economic harm is not that the money vanishes. It is delayed time-to-market, compounding opportunity cost, and the slow erosion of competitive position that happens when your organization takes 28 days to deliver what your competitor delivers in 12. That is the real cost of wait — not idle salaries, but revenue, retention, and market position slipping away while work sits in queues.

Organizations I have worked with typically improve flow efficiency by 10 to 20 percentage points in the first two to three quarters after addressing their primary constraint. The first measurable improvement — visible in cycle time data, not just in how things feel — shows up within four to six weeks. Improve from 15% to 25% or 30% and you do not need to hire 100 more engineers. You just unlocked capacity equivalent to the ones you already have.
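The capacity claim in that paragraph is back-of-envelope math. A sketch, under the simplifying assumption that delivered value scales linearly with flow efficiency at fixed headcount — a first approximation, not a forecast:

```python
# Back-of-envelope capacity math. Assumes delivered value scales
# linearly with flow efficiency at fixed headcount -- a
# simplification, not a forecast.
engineers = 100
flow_before = 0.15
flow_after = 0.30

equivalent_headcount = engineers * (flow_after / flow_before)
unlocked = equivalent_headcount - engineers

print(f"{flow_before:.0%} -> {flow_after:.0%} flow efficiency delivers "
      f"like {equivalent_headcount:.0f} engineers "
      f"({unlocked:.0f} engineers' worth of capacity unlocked)")
```

Doubling flow efficiency from 15% to 30% delivers like doubling the team — without the hiring, onboarding, or payroll.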

That is the absorption gap. The difference between the productivity your tools create and the value your organization actually captures. Your engineers report 30% faster. Your P&L shows 5%. The gap — $2.25 million per year at a $20 million org — is sitting in your value stream. Waiting for someone to draw the picture.


Objections I Hear

“We already use SAFe / Scrum / Kanban. We have boards.” You have a task management system. Jira shows you where tickets are. It does not show you how long they wait between steps. It does not show you queue depth. It does not capture the handoffs that happen outside the tool — the Slack messages, the emails, the verbal approvals in hallway conversations that nobody logs. A Kanban board is not a value stream map. It is one input to a value stream map.

“We cannot measure all of this.” You do not need precision. You need visibility. Rough estimates from the people who do the work are good enough for the first map. If your engineer says “code review usually takes about two days to get picked up,” that is data. You are not building a mathematical model. You are drawing a picture of where time goes. Directionally accurate is sufficient to identify the constraint.

“What if the results are politically uncomfortable?” They will be. The biggest bottleneck in your value stream is almost never a technical system. It is a person, a team, or a policy. The security review bottleneck exists because one person does all reviews. The code review bottleneck exists because you mandated two senior approvals and you have three seniors. The deployment bottleneck exists because a VP created a change advisory board after an incident in 2009 and nobody has questioned it since. The map gives you evidence. What you do with the evidence — that is leadership.


Start This Week

You need a whiteboard. You need the people who do the work. You need two hours.

You do not need a consultant. You do not need a tool. You do not need permission from your board. You do not need a transformation initiative, a change management program, or a steering committee.

You need the willingness to look at how your organization actually works. Not how you think it works. Not how your process documentation says it works. Not how the dashboards report it works.

How it actually works.

Draw the picture. Add the time. Find the bottleneck.

The map itself is free. What comes next depends on what the picture shows you. If the constraint is a meeting you can cancel, cancel it Monday. If it is a shared resource you can replicate, replicate it this sprint. If it is something deeper — an organizational design problem, a cross-functional handoff that requires two departments to change how they work, an AI automation strategy that needs to be scoped and built — that is where an outside perspective pays for itself, usually inside two quarters.

But start with the picture. The picture costs nothing and reveals everything.

Not ten things. One. The biggest constraint. Remove it. Then draw the picture again.

Your engineers are faster. Your organization is not. The gap — $2.25 million per year at a $20 million org — is sitting in your value stream. Visible. Measurable. Fixable.

Draw it.
