If You Want to Measure Macro Results, Answer These 3 Questions Before AI Touches Your SDLC
Executive Brief


If you cannot whiteboard the 80 percent of engineering effort that produces zero customer value, you cannot justify an AI budget.

01

Focus on business throughput rather than developer activity

Accelerating the creation of unshipped inventory provides no market advantage. A faster engineer who is blocked by a slow release cycle creates waste.

Example: Picture a developer using AI to double their code output while the deployment gate remains fixed at one release per month. The inventory of unreleased code simply grows larger.

02

Avoid wasting six-figure license fees on the wrong constraints

Until you distinguish architectural friction from developer effort, your AI tools will only help engineers write code that sits in a queue for weeks.

Example: A team adopts autocomplete tools to speed up syntax writing while their CI/CD pipeline takes four hours to run and fails frequently. The bottleneck is the pipeline, not the typing.

03

Address the 80 percent of effort that produces zero customer value

When only 15 to 20 percent of engineering effort goes to work that creates value, you have a leadership problem that no tool can solve.

Example: Engineering leaders celebrate a 30 percent increase in code commits while the product roadmap stays stalled, because the majority of those commits fix tech debt rather than deliver features.

If you cannot whiteboard the 80 percent of engineering effort that produces zero customer value, you cannot justify an AI budget to your board.

From the Executive Brief

04

Trace features from idea to production instead of relying on Jira

Automating a workflow that does not exist in reality yields no gain. Project management data often masks the actual points of friction.

Example: A dashboard shows a feature is "In Progress" for three days, but the engineer spent that time waiting for a permissions grant that is never tracked in the ticket system.
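The gap between ticket status and reality becomes visible the moment you trace one feature through timestamped events rather than ticket states. A minimal sketch, using hypothetical event data and category names (nothing here comes from a real Jira export):

```python
from datetime import datetime

# Hypothetical event trace for one feature. Ticket systems record the
# status ("In Progress" for three days); the wait on the permissions
# grant never appears in the ticket at all.
events = [
    ("2024-05-01 09:00", "coding"),
    ("2024-05-01 11:00", "waiting: permissions grant"),
    ("2024-05-04 11:00", "coding"),
    ("2024-05-04 15:00", "done"),
]

def hours_by_state(events):
    """Sum elapsed hours per state between consecutive events."""
    totals = {}
    parsed = [(datetime.fromisoformat(t), s) for t, s in events]
    for (t0, state), (t1, _) in zip(parsed, parsed[1:]):
        totals[state] = totals.get(state, 0.0) + (t1 - t0).total_seconds() / 3600
    return totals

totals = hours_by_state(events)
active = totals.get("coding", 0.0)
waiting = sum(h for s, h in totals.items() if s.startswith("waiting"))
print(f"active: {active:.0f}h, waiting: {waiting:.0f}h")
# Flow efficiency: the share of elapsed time spent on value-adding work.
print(f"flow efficiency: {active / (active + waiting):.0%}")
```

On this illustrative trace, six hours of coding sit inside seventy-eight elapsed hours, so the dashboard's "In Progress" hides a flow efficiency under ten percent.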

05

Establish a quantified baseline before committing to an AI budget

Without knowing where your engineering hours actually go, your budget is a speculative bet rather than a strategic investment in productivity.

Example: A CTO is asked to justify the ROI of a new suite of agents. Without knowing the current percentage of time lost to environment setup, they cannot prove any meaningful improvement.
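A quantified baseline can be as simple as one quarter of audited hours per category for a single cohort. The sketch below uses invented category names and figures purely for illustration, not benchmarks; the point is that the value-adding share must exist as a number before any AI ROI claim can be tested against it:

```python
# Hypothetical one-quarter time audit for a single engineering cohort
# (hours per category). Figures are illustrative only.
baseline = {
    "feature work": 320,
    "tech debt / rework": 540,
    "environment setup": 180,
    "waiting on reviews/CI": 410,
    "meetings / coordination": 250,
}

# Which categories count as customer value is a leadership decision,
# made explicit here rather than buried in a dashboard definition.
VALUE_ADD = {"feature work"}

total = sum(baseline.values())
value = sum(h for k, h in baseline.items() if k in VALUE_ADD)
print(f"value-adding share: {value / total:.0%}")
for category, hours in sorted(baseline.items(), key=lambda kv: -kv[1]):
    print(f"{category:>24}: {hours / total:.0%}")
```

With a baseline like this on the table, the CTO can answer the board's question: an agent suite that halves "environment setup" moves a known number, and a pilot that moves nothing is visible within one quarter.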

The Binary

Measuring Activity vs Measuring Throughput

Speculative Bet: measuring activity. Focuses on commit frequency and PR velocity; increases the inventory of unshipped code.

Strategic Investment: measuring throughput. Traces single features from idea to production; identifies and removes structural waste.

Decision

Authorize a one-quarter diagnostic for a single engineering cohort

Decline, and you will continue to fund a speculative bet on automation without knowing which structural constraints are actually blocking your delivery.

— Norman Agent Driven Development