
The AI Soft Ban Assessment

A browser-only scoring assessment of whether your company has quietly soft-banned AI in the SDLC while still calling itself AI friendly.

Executive brief

Organizations that fail to integrate AI into engineering workflows risk talent attrition and unacknowledged economic loss.

Treat AI integration as a capital investment, not an operational overhead.

  • The total cost of an engineer, inclusive of hiring, onboarding, and lost context, far exceeds their base compensation; policies that impede productivity incur substantial, often unmeasured, economic penalties.
  • Enterprise governance and control frameworks must enable new capabilities through concrete controls rather than restrict innovation through abstract risk statements.
  • The absence of adequate tooling and access to frontier models acts as a de facto soft ban, disincentivizing adoption and signaling to high-performing engineers that official policy is performative.
  • Effective AI integration measures success by reduced delivery timelines; a two-thirds reduction in the time required for complex changes demonstrates the potential gain in throughput.

The critical assessment for AI adoption is whether the current organizational structure can accommodate and benefit from new ways of working.

Read the full executive package →


This might seem real, because it is.

It is taken from a real phone call with a senior engineering manager asking whether they should find a new role.

Not because the comp was bad. Not because the manager was cruel. Not because the work was boring. The work was exactly the kind of work an ambitious software leader wants in 2026: legacy systems, product pressure, real customers, and enough AI capability in the market to change the economics of the whole operation.

The company said it was AI friendly.

The employee was not sure the company meant it.

That is the part worth sitting with. The official story was yes. The lived workflow was no. The approved tool existed. The useful model was either unavailable, discouraged, metered, or treated like contraband with a budget code. Asking when the team would get the better models made the person feel like they had done something socially dangerous. Data privacy appeared in conversations as a spell, not a control. Governance meant delay. The prompt, somehow, had become a change request.

So the question on the phone was not really, “Can I use AI here?”

The question was, “Am I about to waste the next two years of my career waiting for this company to admit it does not want the thing it says it wants?”

That is a rational question.

A senior engineer in the United States costs the company somewhere between $250,000 and $360,000 a year fully loaded. Replacing one good one can cost the same again once you count recruiting, ramp time, lost system context, and the feature that quietly moves from Q2 to Q4 because the person who knew the state machine left. If your AI policy saves $400 a month in inference and pushes one serious engineer into a recruiter conversation, the policy did not save money. It moved the loss to a spreadsheet nobody connects to AI.
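The arithmetic is worth making explicit. A back-of-envelope sketch, using the midpoint of the fully-loaded range above (all figures illustrative, taken from this paragraph):

```python
# Back-of-envelope: inference "savings" vs. the cost of one departure.
# Figures are illustrative, taken from the ranges in the text above.

fully_loaded_cost = 300_000           # midpoint of the $250k-$360k fully loaded range
replacement_cost = fully_loaded_cost  # recruiting, ramp, lost context: roughly the same again
inference_savings = 400 * 12          # $400/month of rationed inference, annualized

# How many years of token savings one avoidable departure consumes.
years_to_break_even = replacement_cost / inference_savings

print(f"Annual inference savings:        ${inference_savings:,}")
print(f"Cost of replacing one engineer: ~${replacement_cost:,}")
print(f"Years of savings consumed:       {years_to_break_even:.1f}")
```

Run it and the policy's asymmetry is hard to argue with: one departure erases decades of the line item the policy was protecting.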

The assessment below is built from that conversation and several like it. It runs entirely in your browser. It does not submit anything, store anything remotely, call an API, use local storage, or save your answers. If you refresh the page, the answers are gone.

Answer based on the workflow your engineers actually live in on a Tuesday afternoon when the branch is open, the tests are failing, and the customer thing needs to ship.

This is not permission to create a security incident to prove a point. If the official path is broken, document the broken path. Do not paste customer data, secrets, or proprietary code into tools the company cannot audit.

Browser-only assessment

Did your company accidentally ban AI in the SDLC?

Answer all 13 questions. Score 0 when the system enables the work. Score 2 when the workflow says no while the slide says yes.

  1. What is the best model engineers can use without whispering?
  2. Do enabled models also come with a little social warning label?
  3. How much does token-cost anxiety shape engineering behavior?
  4. How long is the path from "we should use an agent" to approved use?
  5. Does the prompt somehow become a CAB item?
  6. What happens when someone asks for the current models?
  7. Who actually owns the no?
  8. How much better is the shadow workflow than the approved workflow?
  9. Can the agent see enough context to be useful?
  10. Are the evals measuring work, or are they measuring comfort?
  11. Does governance move at software speed?
  12. Is visible AI usage career-positive?
  13. What disaster story keeps hijacking the room?
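The scoring mechanics are simple enough to sketch. A minimal illustration of the rubric described above (the answer values below are hypothetical, and counting the outright 2s is my shorthand for the pattern, not a feature of the assessment itself):

```python
# Sketch of the assessment's scoring as described in the text:
# 13 questions, each scored 0 (the system enables the work) to
# 2 (the workflow says no while the slide says yes). Max score 26.

answers = [2, 0, 1, 2, 0, 2, 1, 0, 2, 1, 0, 2, 1]  # hypothetical answers, one per question

assert len(answers) == 13 and all(a in (0, 1, 2) for a in answers)

total = sum(answers)                          # the headline number, out of 26
hard_nos = sum(1 for a in answers if a == 2)  # the pattern: how many outright 2s

print(f"Score: {total}/26")
print(f"Questions scored 2: {hard_nos}")
```

The `hard_nos` count is the thing to watch: as the next section argues, a cluster of 2s says more than the total.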

Complete all 13 questions before treating the result as meaningful.

How to Read the Score

The number matters less than the pattern.


A 2 on weak models is annoying. A 2 on weak models, a 2 on the social watch list, a 2 on context access, and a 2 on career risk is not annoying. It is an operating model. It tells the best people in the org that the official path is performative and the real path is private.

That is how you lose the people who were going to teach everyone else.

What to Do With It

Do not argue about whether the company is AI friendly. That conversation turns into culture fog almost immediately.

Argue about a workflow.

Bring one real scenario to the table:

An engineer needs to migrate a deprecated SDK across four services, update tests, fix the CI failures, open pull requests, and produce a runbook for the on-call team.

Then ask six questions.

  1. Can the approved AI tool access the repos, tests, docs, and tickets required to do this work?
  2. Can the engineer use a frontier-capable model without rationing or special pleading?
  3. Does enabled mean allowed, or does using the model put the engineer on a list?
  4. Can the agent create changes under normal branch and pull-request controls?
  5. Are the risks handled by concrete controls instead of meetings?
  6. Can this happen inside the sprint, or does the approval process take longer than doing the work by hand?

If the answer is no, stop calling the program AI enablement. Call it what it is: an evaluation of whether the old organization can tolerate the new work.

For engineers, the move is not rebellion. Do not create a security problem to prove a governance problem. Document the constraint in business terms:

“This change took nine days. With repo-aware agent access, I believe it would take three. Current blocker: approved tool cannot read the service repos or run tests. Business consequence: six days of delay on feature X.”

That sentence gives your manager something to carry.

For executives, require the governance function to turn every no into a control.

“Data privacy” is not a control.

“No customer PII in model context; automated PII scanner runs before context packaging; enterprise no-training terms are in the contract; audit logs retained for 180 days; exception owner is the CISO delegate; review SLA is two business days” is a control.
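The difference between a spell and a control is that a control can be executed. A toy sketch of the "automated PII scanner runs before context packaging" step (the patterns and function names are illustrative, not a production scanner; real scanners use much richer detection):

```python
import re

# Toy pre-context PII gate, illustrating the control described above.
# Two illustrative patterns only; a real scanner covers far more.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text: str) -> list[str]:
    """Return the names of PII patterns found in text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def package_context(chunks: list[str]) -> list[str]:
    """Refuse to package model context if any chunk trips the scanner."""
    for chunk in chunks:
        hits = scan_for_pii(chunk)
        if hits:
            raise ValueError(f"PII detected ({', '.join(hits)}); context blocked")
    return chunks

package_context(["def handler(event): ..."])          # passes the gate
# package_context(["contact: jane.doe@example.com"])  # would raise ValueError
```

The point is not this specific code; it is that every clause of the control above is checkable, ownable, and has an SLA, which is what makes it a path to yes.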

“Security risk” is not a control.

“Agents cannot access production secrets; secret scanning runs pre-commit and in CI; agent-created PRs require the same approval policy as human-created PRs; dependency changes trigger SCA and license review automatically” is a control.

The difference matters because controls create a path to yes. Vague risk creates a permanent no that nobody has to own.

You can keep the hitching posts if you want.

Just stop calling them chargers.

The views and opinions expressed in this article are the author’s own and do not represent the positions of any employer, client, or affiliated organization.
