This might seem real, because it is.
It is taken from a real phone call with a senior engineering manager asking whether they should find a new role.
Not because the comp was bad. Not because the manager was cruel. Not because the work was boring. The work was exactly the kind of work an ambitious software leader wants in 2026: legacy systems, product pressure, real customers, and enough AI capability in the market to change the economics of the whole operation.
The company said it was AI friendly.
The employee was not sure the company meant it.
That is the part worth sitting with. The official story was yes. The lived workflow was no. The approved tool existed. The useful model was either unavailable, discouraged, metered, or treated like contraband with a budget code. Asking when the team would get the better models made the person feel like they had done something socially dangerous. Data privacy appeared in conversations as a spell, not a control. Governance meant delay. The prompt, somehow, had become a change request.
So the question on the phone was not really, “Can I use AI here?”
The question was, “Am I about to waste the next two years of my career waiting for this company to admit it does not want the thing it says it wants?”
That is a rational question.
A senior engineer in the United States costs the company somewhere between $250,000 and $360,000 a year fully loaded. Replacing one good one can cost the same again once you count recruiting, ramp time, lost system context, and the feature that quietly moves from Q2 to Q4 because the person who knew the state machine left. If your AI policy saves $400 a month in inference and pushes one serious engineer into a recruiter conversation, the policy did not save money. It moved the loss to a spreadsheet nobody connects to AI.
The assessment below is built from that conversation and several like it. It runs entirely in your browser. It does not submit anything, store anything remotely, call an API, use local storage, or save your answers. If you refresh the page, the answers are gone.
Answer based on the workflow your engineers actually live in on a Tuesday afternoon when the branch is open, the tests are failing, and the customer thing needs to ship.
This is not permission to create a security incident to prove a point. If the official path is broken, document the broken path. Do not paste customer data, secrets, or proprietary code into tools the company cannot audit.
Browser-only assessment
Did your company accidentally ban AI in the SDLC?
Answer all 13 questions. Score 0 when the system enables the work. Score 2 when the workflow says no while the slide says yes.
Your score will update as you choose answers. Complete all 13 questions before treating the result as meaningful.
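The scoring rule is simple enough to sketch. A minimal illustration in Python (the real assessment is an in-page script; the function and names here are invented for illustration, and nothing is persisted, matching the browser-only behavior described above):

```python
# Sketch of the assessment's scoring rule: each of the 13 questions
# scores 0 (the system enables the work) or 2 (the workflow says no
# while the slide says yes). State lives in memory only.

QUESTION_COUNT = 13  # assumption: mirrors the 13 questions above

def score(answers):
    """answers: list of 0/2 values, one per answered question."""
    if any(a not in (0, 2) for a in answers):
        raise ValueError("each answer scores 0 or 2")
    return {
        "total": sum(answers),
        # Treat the result as meaningful only once all 13 are answered.
        "complete": len(answers) == QUESTION_COUNT,
    }

# Four questions score 2, the rest score 0: total 8, assessment complete.
result = score([2, 2, 2, 2] + [0] * 9)
```

The maximum score is 26; the pattern of where the 2s cluster matters more than the total, as the next section argues.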
How to Read the Score
The number matters less than the pattern.
A 2 on weak models is annoying. A 2 on weak models, a 2 on the social watch list, a 2 on context access, and a 2 on career risk is not annoying. It is an operating model. It tells the best people in the org that the official path is performative and the real path is private.
That is how you lose the people who were going to teach everyone else.
What to Do With It
Do not argue about whether the company is AI friendly. That conversation turns into culture fog almost immediately.
Argue about a workflow.
Bring one real scenario to the table:
An engineer needs to migrate a deprecated SDK across four services, update tests, fix the CI failures, open pull requests, and produce a runbook for the on-call team.
Then ask six questions.
- Can the approved AI tool access the repos, tests, docs, and tickets required to do this work?
- Can the engineer use a frontier-capable model without rationing or special pleading?
- Does “enabled” mean “allowed,” or does using the model put the engineer on a list?
- Can the agent create changes under normal branch and pull-request controls?
- Are the risks handled by concrete controls instead of meetings?
- Can this happen inside the sprint, or does the approval process take longer than doing the work by hand?
If the answer is no, stop calling the program AI enablement. Call it what it is: an evaluation of whether the old organization can tolerate the new work.
For engineers, the move is not rebellion. Do not create a security problem to prove a governance problem. Document the constraint in business terms:
“This change took nine days. With repo-aware agent access, I believe it would take three. Current blocker: approved tool cannot read the service repos or run tests. Business consequence: six days of delay on feature X.”
That sentence gives your manager something to carry.
For executives, require the governance function to turn every no into a control.
“Data privacy” is not a control.
“No customer PII in model context; automated PII scanner runs before context packaging; enterprise no-training terms are in the contract; audit logs retained for 180 days; exception owner is the CISO delegate; review SLA is two business days” is a control.
“Security risk” is not a control.
“Agents cannot access production secrets; secret scanning runs pre-commit and in CI; agent-created PRs require the same approval policy as human-created PRs; dependency changes trigger SCA and license review automatically” is a control.
The difference matters because controls create a path to yes. Vague risk creates a permanent no that nobody has to own.
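What a control looks like in code can be made concrete. Below is a hedged sketch of the pre-context PII gate described above, in Python; the patterns, function names, and blocking behavior are illustrative assumptions, not a production scanner (a real implementation would use a vetted detection library and org-specific rules):

```python
import re

# Illustrative patterns only -- a real scanner covers far more
# categories and uses tested detectors, not two regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text):
    """Return the PII categories detected, so the gate can block packaging."""
    return sorted(name for name, pat in PII_PATTERNS.items() if pat.search(text))

def package_context(chunks):
    """Runs before model context is assembled: block on any hit, and the
    exception becomes the auditable event the exception owner reviews."""
    for chunk in chunks:
        hits = scan_for_pii(chunk)
        if hits:
            raise PermissionError(f"blocked: possible PII ({', '.join(hits)})")
    return "\n".join(chunks)
```

The point is not these regexes. The point is that the control is executable, it fails closed, and a specific person owns the exception path, which is what separates a control from a slogan.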
You can keep the hitching posts if you want.
Just stop calling them chargers.