Trust systems, not just agents. Establish a delivery system that measures risk, checks outcomes, and reduces over-verification.
Trust is defined by evidence and system properties, not familiarity. Any actor—human or machine—requires verification proportionate to risk.
Example: A new team member and a long-standing veteran both submit changes. The system should apply verification based on the change's risk, not solely on who submitted it.
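The idea above can be sketched as a risk function that looks only at what a change does. Everything here is illustrative (the path conventions, weights, and thresholds are assumptions, not a standard); the point is that the author's identity is recorded but never enters the score.

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    author: str                       # recorded for audit, never used for scoring
    paths: list = field(default_factory=list)
    lines_changed: int = 0
    touches_migration: bool = False

def risk_score(change: Change) -> int:
    """Return a risk score derived only from the change's properties."""
    if all(p.endswith(".md") for p in change.paths):
        return 0                              # docs-only changes are low risk
    score = 0
    if any(p.startswith("db/") for p in change.paths):
        score += 5                            # data layer is high risk
    if change.touches_migration:
        score += 5
    score += min(change.lines_changed // 100, 3)  # size adds bounded risk
    return score

newcomer = Change(author="new-hire", paths=["docs/intro.md"], lines_changed=40)
veteran = Change(author="veteran", paths=["db/schema.sql"],
                 lines_changed=400, touches_migration=True)
assert risk_score(newcomer) < risk_score(veteran)  # identity played no part
```

Swapping the two authors changes nothing: the veteran's schema change still scores high, and the new hire's documentation fix still scores zero.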
Code review is a mechanism for surfacing missing trust, not for creating it.
From the Executive Brief
If review is the only trust mechanism, the system has a bottleneck, not a process. Reviews validate existing trust; they do not establish it from scratch.
Example: A new feature branch undergoes extensive peer review. If review is the only quality gate, the team spends excessive time scrutinizing minor details rather than relying on automated tests or deployment safeguards.
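A minimal sketch of the layered-gate idea, in which peer review is one check among several rather than the sole gate. The gate names and the flags on the change are assumptions for illustration; in practice each gate would call a real CI or deployment system.

```python
def run_gates(change, gates):
    """Run each gate in order; a change ships only if every gate passes."""
    results = {name: gate(change) for name, gate in gates}
    return all(results.values()), results

# Hypothetical gates: cheap automated checks run first, human review last.
gates = [
    ("lint",        lambda c: c["lints_clean"]),
    ("unit_tests",  lambda c: c["tests_pass"]),
    ("canary",      lambda c: c["canary_healthy"]),  # deployment safeguard
    ("peer_review", lambda c: c["approved"]),
]

change = {"lints_clean": True, "tests_pass": True,
          "canary_healthy": True, "approved": True}
ok, results = run_gates(change, gates)
assert ok  # review confirmed what the automated gates already established
```

Because the automated gates run first, the reviewer's attention is spent only on changes that have already cleared the mechanical checks.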
The risk classification of a change, not its author, determines the necessary verification bar. Low-risk changes require automated checks; high-risk changes demand robust human oversight and adversarial testing.
Example: A documentation update and a core database schema change are both submitted. The system automatically merges the former after linting, while the latter requires multiple senior engineers and dedicated test environments.
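The routing described above can be sketched as a small table mapping risk tier to a verification bar. The tier names, path conventions, and requirements are illustrative assumptions; the structure is the point: classification depends on what the change touches, and the tier alone selects the bar.

```python
# Hypothetical verification requirements per risk tier.
REQUIREMENTS = {
    "low":    {"human_reviews": 0, "checks": ["lint", "unit_tests"]},
    "medium": {"human_reviews": 1, "checks": ["lint", "unit_tests",
                                              "integration"]},
    "high":   {"human_reviews": 2, "checks": ["lint", "unit_tests",
                                              "integration", "staging_env"]},
}

def tier_for(paths):
    """Classify by what the change touches, not by who wrote it."""
    if any(p.startswith("db/") for p in paths):
        return "high"                    # schema changes get the strictest bar
    if all(p.endswith((".md", ".rst")) for p in paths):
        return "low"                     # docs-only: automated checks suffice
    return "medium"

docs_change = ["README.md"]
schema_change = ["db/migrations/add_index.sql"]
assert REQUIREMENTS[tier_for(docs_change)]["human_reviews"] == 0
assert REQUIREMENTS[tier_for(schema_change)]["human_reviews"] == 2
```

Keeping the requirements in one table also makes the verification policy itself reviewable: raising the bar for a tier is a one-line change rather than a tribal convention.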
The cost of distrust manifests in review burden, slowed cycle time, and diverted senior-engineer attention. Organizations incur significant expense when they over-verify low-risk changes.