I have sat in too many leadership meetings talking about activity metrics. Last year I sat in a quarterly review where a transformation lead presented forty-seven slides. I counted. Forty-seven slides of activity metrics. Number of people who completed AI training. Number of workshops delivered. Number of teams that participated in a pilot. Number of tools procured. Number of lunch-and-learns scheduled (they actually tracked this). Number of Slack channels created for “AI exploration.”
The executive sponsor nodded along. The CFO was on her phone. The CTO asked one question at the end: “What changed?”
Silence. Not uncomfortable silence. Confused silence. The transformation lead genuinely did not understand the question. She had just shown forty-seven slides of things that happened. Was that not the answer?
Here is the part nobody in that room wanted to say out loud: she was reporting exactly what she was asked to report. The KPIs on her dashboard were the KPIs that leadership approved. “People trained” was a KPI because someone in the C-suite put it there. The executives nodding along built the scorecard that measured activity, then acted surprised when activity was all they got.
The system produced the outcome the system was designed to produce.
The Difference Between Theater and Precondition
I want to be precise about something before I go further, because this is where organizations get it wrong in both directions.
Some training is theater. A four-part webinar series that nobody watches. A Slack channel with links to articles. A Friday afternoon lunch-and-learn where people eat the pizza and check their phones. That is activity-as-deliverable, and it should be called what it is.
But some training is a genuine precondition for change. Teaching a team of senior engineers how agent-assisted development actually works in their codebase, with their architecture, against their deployment pipeline, that is not theater. Building psychological safety so people can say “I do not understand how to use this tool” without feeling like they are volunteering for a layoff list, that is not theater either. Change readiness work, skills confidence work, the human side of making people willing and able to operate differently, all of that matters.
The problem is not that organizations train people. The problem is that organizations count the training and call it done. Two hundred people completed the course. Check the box. Move to the next slide. Nobody asks whether those two hundred people changed a single thing about how they work on Monday morning.
The Dollar Cost of Motion Without Progress
You are about to embark on a transformation program like this one. Let me walk you through how it will go before you sign anything.
You will hire a transformation team. Six good people, fully loaded, around $1.2 million a year. They will build a beautiful roadmap and put it on a wall. You will feel like things are moving.
You will hire external consultants for workshops and assessments. Another $400,000. They will be smart and they will be helpful and they will produce a maturity model with five tidy stages. You will be at stage two, and you will agree that stage three is where you want to be by Q4.
You will buy tooling licenses for eight hundred engineers. $600,000 a year. The vendor will assign a customer success manager who is genuinely good at her job. She will send you weekly utilization reports and you will read them.
You will stand up an executive steering committee. Twelve VPs in the room every month for ninety minutes. Do the math on what that room costs per hour. I will wait.
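Since the author invites you to do the math, here is a back-of-the-envelope version. The fully loaded VP salary is my assumption for illustration, not a figure from the article; swap in your own numbers.

```python
# Rough cost of a monthly steering committee: 12 VPs, 90 minutes.
# The salary figure below is an assumption, not from the article.
VP_FULLY_LOADED_ANNUAL = 350_000   # assumed fully loaded cost per VP
WORKING_HOURS_PER_YEAR = 2_000     # ~50 weeks x 40 hours

hourly_rate = VP_FULLY_LOADED_ANNUAL / WORKING_HOURS_PER_YEAR  # $175/hour
attendees = 12
meeting_hours = 1.5

cost_per_meeting = hourly_rate * attendees * meeting_hours
annual_cost = cost_per_meeting * 12  # one meeting per month

print(f"${cost_per_meeting:,.0f} per meeting, ${annual_cost:,.0f} per year")
```

Under those assumptions the room costs roughly $3,000 per sitting before anyone counts preparation time, and that is the cheap part of the parable.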
Eighteen months will pass. Your CTO will walk into a quarterly review and ask the only question that matters: what changed? And the answer will be that tool utilization has gone from eight percent to twenty-three percent.
You will have been spending $2.2 million a year to get a quarter of your engineers to open a tool. Not to change how they build software. Not to reduce time-to-market. Not to shift a single business outcome. To log in more often.
Your CFO will notice. The CFO always notices eventually.
If you run a forty-person shop, run the same parable at smaller scale. You will spend $40,000 on licenses and $15,000 on a training consultant. Your metric will be “percentage of engineers who activated their license.” That is a procurement metric, not a business metric. Next year your CFO will review the renewal — your CFO reviews every line item — and ask what the $55,000 bought in features shipped, defects reduced, or cycle time compressed. You will not be able to answer. The renewal will not happen.
I am not telling you a story about somebody else. I am telling you the story you are about to live, unless you change one thing before you start: decide what success looks like in business terms, and refuse to spend a dollar that is not in service of it.
Outcome-Based Planning Starts at the End
The fix is not complicated to describe. It is difficult to execute because it requires leaders to commit to specific, measurable changes in how the business operates.
Outcome-based planning starts with one question: what is different about this organization in six months if the initiative succeeds?
I push teams to answer this in terms that cross functional boundaries, not just engineering metrics. Here is what real outcomes look like:
- Our payments team reduces time from commit to production from six weeks to two weeks, and we measure it every sprint. (Engineering)
- Our finance close process moves from fourteen business days to five because three manual reconciliation steps are automated. (Finance ops)
- Customer onboarding drops from three weeks to three days because the intake workflow no longer routes through four departments sequentially. (Operations)
- Claims processing time drops forty percent because the rules engine that took nine months to update can now be modified and tested in two weeks. (Insurance ops)
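What makes an outcome like the first one verifiable is that you can compute it from deployment records instead of self-reported status. A minimal sketch, assuming you can export commit and deploy timestamps from your pipeline; the records and field names here are illustrative, not a real API.

```python
from datetime import datetime, timedelta
from statistics import median

# Illustrative records: when a change was committed and when it reached
# production. In practice these come from your VCS and deployment pipeline.
deployments = [
    {"commit": datetime(2024, 3, 1), "deployed": datetime(2024, 4, 12)},
    {"commit": datetime(2024, 3, 8), "deployed": datetime(2024, 4, 19)},
    {"commit": datetime(2024, 4, 2), "deployed": datetime(2024, 4, 16)},
]

lead_times = [d["deployed"] - d["commit"] for d in deployments]
median_lead = median(lead_times)

# The six-weeks-to-two-weeks target, expressed as a check a dashboard
# can run every sprint instead of a status someone asserts in a deck.
target = timedelta(weeks=2)
print(f"median lead time: {median_lead.days} days, "
      f"target met: {median_lead <= target}")
```

The point is not the ten lines of Python; it is that the metric turns red or green on its own, with no room for a slide to reinterpret it.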
You can verify every one of those. You can point to them in a room and say “we did this” or “we did not do this.” A CFO can attach a dollar figure to each one. A COO can map them to the P&L.
Once you have the end state defined, you work backward. What has to be true for the payments team to hit two-week cycle time? What has to change in the finance close process for three steps to be automated? Who owns the customer onboarding workflow, and what approval do they need to redesign it? Those questions produce a plan. Not a list of activities. A set of conditions that need to exist.
The activities (the training, the workshops, the tooling) might still happen. But they happen in service of a defined outcome. You do not train two hundred people because training two hundred people is the goal. You train the twelve people on the payments team because they need specific skills to hit the specific outcome you committed to.
Why This Requires Air Cover
I have to be honest about something that most articles on this topic skip. Outcome-based planning is a career risk for whoever owns the outcome.
Activity metrics can only go up. You will never have fewer workshops delivered this quarter than last quarter. The trendline is always positive.
Outcome metrics can fail. You committed to a forty percent reduction in claims processing time and you got fifteen percent because the compliance team would not approve the new rules engine workflow. You committed to five-day finance close and you are stuck at nine because the ERP integration took longer than estimated. That failure is visible. It is on a slide. It has your name on it.
I have watched what happens next in organizations that do not have executive air cover for the transformation lead. The transformation lead gets blamed. Not the compliance team that blocked the workflow. Not the ERP vendor that missed the integration timeline. The person who committed to the outcome and missed it. They get quietly moved to a different role, and the next person learns the lesson: measure activities, not outcomes, because activities cannot fail.
This is why outcome-based planning has to be sponsored from the top, and the sponsorship has to be real. The CTO or COO or whoever is commissioning the work has to say, publicly, in the room: “When this outcome turns red on the dashboard, we are going to fix the blocker, not fire the messenger.” And then they have to actually do it when the first outcome turns red. Because it will. That is the whole point. The red indicator is the system working correctly, surfacing the thing that needs to change.
Without that air cover, you are asking your transformation lead to put their career on the line for a metric they cannot fully control. Smart people will not do that. They will give you activity dashboards instead, and you will nod along for another eighteen months.
What I Ask in the Room
When I work with organizations, I start with the same question every time. I ask the sponsor: “Tell me the three outcomes you are committed to delivering in the next six months. Not activities. Outcomes. What is different about how this organization operates?”
If the answer comes back in activities (we are going to train five hundred people, we are going to run a hackathon, we are going to deploy AI tooling to all engineers), I know the initiative is already in trouble. Not because those activities are bad. Because nobody has decided what success looks like, so nobody can tell whether the activities are the right ones.
If the answer comes back in outcomes, now we can have a real conversation. What stands in the way? What has to change? Who has to approve it? What happens if they say no? Who has air cover to own this, and who gave it to them?
Those are the questions that actually matter. You cannot ask them until you know where you are going.
Your transformation dashboard has a number on it right now. It might be people trained, or workshops completed, or tools deployed, or pilots launched. Look at that number and ask yourself: if that number doubled tomorrow, would anything about how your organization operates actually change? Would your CFO be able to point to a line item on the P&L that moved? Would your COO see a process that runs differently? Would a single customer notice?
If the answer is no, you are not tracking progress. You are tracking comfort. And comfort has a carrying cost that someone is eventually going to ask you to justify.
What is different about your organization today that was not true six months ago? Not what happened. What changed. And who had the air cover to make it happen?
