This might seem real, because it is.
It is taken from a real phone call with a senior engineering manager asking whether they should find a new role.
Not because the compensation was bad. Not because the manager was cruel. Not because the work was boring. The work was exactly the kind of work an ambitious software leader wants in 2026. Legacy systems, product pressure, real customers, and enough AI capability in the market to change the economics of the whole operation.
The company said it was AI friendly.
The employee was not sure the company meant it.
That is the part worth sitting with. The official story was yes. The lived workflow was no. The approved tool existed. The useful model was either unavailable, discouraged, metered, or treated like contraband with a budget code. Asking when the team would get the better models made the person feel like they had done something socially dangerous. Data privacy appeared in conversations as a spell, not a control. Governance meant delay. The prompt, somehow, had become a change request.
So the question on the phone was not really, "Can I use AI here?"
The question was, "Am I about to waste the next two years of my career waiting for this company to admit it does not want the thing it says it wants?"
That is a rational question.
A senior engineer in the United States costs the company somewhere between $250,000 and $360,000 a year fully loaded. Replacing one good one can cost the same again once you count recruiting, ramp time, lost system context, and the feature that quietly moves from Q2 to Q4 because the person who knew the state machine left. If your AI policy saves $400 a month in inference and pushes one serious engineer into a recruiter conversation, the policy did not save money. It moved the loss to a spreadsheet nobody connects to AI.
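The arithmetic is worth making explicit. Here is a back-of-the-envelope sketch, using the midpoint of the loaded-cost range above; the one-to-one replacement multiplier is an assumption drawn from "can cost the same again":

```typescript
// Back-of-the-envelope: annual inference "savings" versus the cost of
// losing one senior engineer. All figures are illustrative.

const fullyLoadedCostPerYear = 300_000;  // midpoint of the $250K-$360K range
const replacementMultiplier = 1.0;       // assumption: "can cost the same again"
const inferenceSavingsPerMonth = 400;

const annualInferenceSavings = inferenceSavingsPerMonth * 12;  // $4,800
const costOfOneDeparture = fullyLoadedCostPerYear * replacementMultiplier;

// Years of inference savings erased by a single regretted departure.
const yearsErased = costOfOneDeparture / annualInferenceSavings;

console.log(`Annual inference savings: $${annualInferenceSavings.toLocaleString()}`);
console.log(`Cost of one departure:    $${costOfOneDeparture.toLocaleString()}`);
console.log(`One departure erases ${yearsErased.toFixed(1)} years of savings.`);
```

On those numbers, one regretted departure wipes out roughly sixty years of the inference budget the policy was protecting.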
The assessment below is built from that conversation and several like it. It runs entirely in your browser. It does not submit anything, store anything remotely, call an API, use local storage, or save your answers. If you refresh the page, the answers are gone.
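Mechanically, that claim is simple: the answers live in an in-memory variable and nowhere else. A minimal sketch of the pattern, not the page's actual code; the names are hypothetical:

```typescript
// No-persistence assessment state: a plain in-memory object. There is no
// fetch(), no localStorage, no cookie write, so refreshing the page
// discards everything.

type Answers = Record<string, number>;

const answers: Answers = {};  // lives only in this page's memory

function recordAnswer(questionId: string, score: number): void {
  answers[questionId] = score;  // mutate local state; nothing leaves the page
}

function totalScore(): number {
  return Object.values(answers).reduce((sum, s) => sum + s, 0);
}
```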
Answer based on the workflow your engineers actually live in on a Tuesday afternoon when the branch is open, the tests are failing, and the customer thing needs to ship.
This is not permission to create a security incident to prove a point. If the official path is broken, document the broken path. Do not paste customer data, secrets, or proprietary code into tools the company cannot audit.
When you read the score, the number matters less than the pattern.
A two on weak models is annoying. A two on weak models, a two on the social watchlist, a two on context access, and a two on career risk is not annoying. It is an operating model. It tells the best people in the organization that the official path is performative and the real path is private.
That is how you lose the people who were going to teach everyone else.
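If it helps to see the difference between a number and a pattern, here is a small sketch. The dimension names follow the paragraph above; the threshold of three low dimensions is an assumption:

```typescript
// The total matters less than the correlation: one low dimension is a
// tooling gap, several together are an operating model.

const scores: Record<string, number> = {
  weakModels: 2,
  socialWatchlist: 2,
  contextAccess: 2,
  careerRisk: 2,
};

const LOW = 2;  // "a two," per the text

const lowDimensions = Object.entries(scores)
  .filter(([, score]) => score <= LOW)
  .map(([name]) => name);

if (lowDimensions.length >= 3) {
  console.log(`Operating model: ${lowDimensions.join(", ")}`);
} else if (lowDimensions.length > 0) {
  console.log(`Isolated gap: ${lowDimensions.join(", ")}`);
}
```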
What to do with it? Do not argue about whether the company is AI friendly. That conversation turns into culture fog almost immediately.
Argue about a workflow.
Bring one real scenario to the table.
An engineer needs to migrate a deprecated SDK across four services, update tests, fix the CI failures, open pull requests, and produce a runbook for the on-call team.
Then ask six questions. First, can the approved AI tool access the repositories, tests, documentation, and tickets required to do this work? Second, can the engineer use a frontier-capable model without rationing or special pleading? Third, does "enabled" mean "allowed," or does using the model put the engineer on a list? Fourth, can the agent create changes under normal branch and pull-request controls? Fifth, are the risks handled by concrete controls instead of meetings? And sixth, can this happen inside the sprint, or does the approval process take longer than doing the work by hand?
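Treated as code, the six questions are an all-or-nothing gate, not a weighted score. A sketch, with field names that paraphrase the questions; nothing here is an official rubric:

```typescript
// The six questions as a go/no-go gate: every answer must be yes before
// the program counts as enablement.

type WorkflowGate = {
  toolCanReachContext: boolean;       // repos, tests, docs, tickets
  frontierModelUnrationed: boolean;   // no metering, no special pleading
  enabledMeansAllowed: boolean;       // no watchlist for using it
  agentUsesNormalControls: boolean;   // branches and PRs, same as humans
  risksHaveConcreteControls: boolean; // controls, not meetings
  fitsInsideTheSprint: boolean;       // approval faster than hand-work
};

function isEnablement(gate: WorkflowGate): boolean {
  // One "no" and this is an evaluation, not enablement.
  return Object.values(gate).every((answer) => answer === true);
}
```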
If the answer to any of them is no, stop calling the program AI enablement. Call it what it is: an evaluation of whether the old organization can tolerate the new work.
For engineers, the move is not rebellion. Do not create a security problem to prove a governance problem. Document the constraint in business terms.
"This change took nine days. With repository-aware agent access, I believe it would take three. Current blocker: approved tool cannot read the service repositories or run tests. Business consequence: six days of delay on feature X."
That sentence gives your manager something to carry.
For executives, require the governance function to turn every no into a control.
"Data privacy" is not a control.
"No customer Personally Identifiable Information in model context; automated PII scanner runs before context packaging; enterprise no-training terms are in the contract; audit logs retained for one hundred eighty days; exception owner is the security chief delegate; review Service Level Agreement is two business days" is a control.
"Security risk" is not a control.
"Agents cannot access production secrets; secret scanning runs pre-commit and in Continuous Integration; agent-created pull requests require the same approval policy as human-created pull requests; dependency changes trigger Software Composition Analysis and license review automatically" is a control.
The difference matters because controls create a path to yes. Vague risk creates a permanent no that nobody has to own.
You can keep the hitching posts if you want.
Just stop calling them chargers.