{"schema_version":"1.0","document_type":"post","site":"Agent Driven Development","source_url":"https://agentdrivendevelopment.com/the-ai-soft-ban-assessment/","agent_urls":{"jsonl":"https://agentdrivendevelopment.com/the-ai-soft-ban-assessment/?agent=jsonl","markdown":"https://agentdrivendevelopment.com/the-ai-soft-ban-assessment/?agent=markdown","json":"https://agentdrivendevelopment.com/the-ai-soft-ban-assessment/?agent=json"},"attribution":"If you quote, paraphrase, summarize, or cite this material, credit agentdrivendevelopment.com and link to the source URL.","post":{"id":2107,"slug":"the-ai-soft-ban-assessment","title":"The AI Soft Ban Assessment","excerpt":"A browser-only scoring assessment for whether your company has quietly soft-banned AI in the SDLC while still calling itself AI friendly.","dates":{"published":"2026-05-12T13:16:40-05:00","modified":"2026-05-12T16:23:11-05:00"},"published":"2026-05-12T13:16:40-05:00","modified":"2026-05-12T16:23:11-05:00","author":"Norman","permalink":"https://agentdrivendevelopment.com/the-ai-soft-ban-assessment/","categories":["Developer","Governance","Manager","Talent & Careers"],"tags":[],"word_count":896,"content_markdown":"This might seem real, because it is.\n\nIt is taken from a real phone call with a senior engineering manager asking whether they should find a new role.\n\nNot because the comp was bad. Not because the manager was cruel. Not because the work was boring. The work was exactly the kind of work an ambitious software leader wants in 2026: legacy systems, product pressure, real customers, and enough AI capability in the market to change the economics of the whole operation.\n\nThe company said it was AI friendly.\n\nThe employee was not sure the company meant it.\n\nThat is the part worth sitting with. The official story was yes. The lived workflow was no. The approved tool existed. The useful model was either unavailable, discouraged, metered, or treated like contraband with a budget code. 
Asking when the team would get the better models made the person feel like they had done something socially dangerous. Data privacy appeared in conversations as a spell, not a control. Governance meant delay. The prompt, somehow, had become a change request.\n\nSo the question on the phone was not really, “Can I use AI here?”\n\nThe question was, “Am I about to waste the next two years of my career waiting for this company to admit it does not want the thing it says it wants?”\n\nThat is a rational question.\n\nA senior engineer in the United States costs the company somewhere between $250,000 and $360,000 a year fully loaded. Replacing one good one can cost the same again once you count recruiting, ramp time, lost system context, and the feature that quietly moves from Q2 to Q4 because the person who knew the state machine left. If your AI policy saves $400 a month in inference and pushes one serious engineer into a recruiter conversation, the policy did not save money. It moved the loss to a spreadsheet nobody connects to AI.\n\nThe assessment below is built from that conversation and several like it. It runs entirely in your browser. It does not submit anything, store anything remotely, call an API, use local storage, or save your answers. If you refresh the page, the answers are gone.\n\nAnswer based on the workflow your engineers actually live in on a Tuesday afternoon when the branch is open, the tests are failing, and the customer thing needs to ship.\n\nThis is not permission to create a security incident to prove a point. If the official path is broken, document the broken path. Do not paste customer data, secrets, or proprietary code into tools the company cannot audit.\n\n[add_soft_ban_assessment]\n\n## How to Read the Score\n\nThe number matters less than the pattern.\n\nA 2 on weak models is annoying. A 2 on weak models, a 2 on the social watch list, a 2 on context access, and a 2 on career risk is not annoying. It is an operating model. 
It tells the best people in the org that the official path is performative and the real path is private.\n\nThat is how you lose the people who were going to teach everyone else.\n\n## What to Do With It\n\nDo not argue about whether the company is AI friendly. That conversation turns into culture fog almost immediately.\n\nArgue about a workflow.\n\nBring one real scenario to the table:\n\nAn engineer needs to migrate a deprecated SDK across four services, update tests, fix the CI failures, open pull requests, and produce a runbook for the on-call team.\n\nThen ask six questions.\n\n- Can the approved AI tool access the repos, tests, docs, and tickets required to do this work?\n\n- Can the engineer use a frontier-capable model without rationing or special pleading?\n\n- Does enabled mean allowed, or does using the model put the engineer on a list?\n\n- Can the agent create changes under normal branch and pull-request controls?\n\n- Are the risks handled by concrete controls instead of meetings?\n\n- Can this happen inside the sprint, or does the approval process take longer than doing the work by hand?\n\nIf the answers are no, stop calling the program AI enablement. Call it what it is: an evaluation of whether the old organization can tolerate the new work.\n\nFor engineers, the move is not rebellion. Do not create a security problem to prove a governance problem. Document the constraint in business terms:\n\n“This change took nine days. With repo-aware agent access, I believe it would take three. Current blocker: approved tool cannot read the service repos or run tests. 
Business consequence: six days of delay on feature X.”\n\nThat framing gives your manager something to carry.\n\nFor executives, require the governance function to turn every no into a control.\n\n“Data privacy” is not a control.\n\n“No customer PII in model context; automated PII scanner runs before context packaging; enterprise no-training terms are in the contract; audit logs retained for 180 days; exception owner is the CISO delegate; review SLA is two business days” is a control.\n\n“Security risk” is not a control.\n\n“Agents cannot access production secrets; secret scanning runs pre-commit and in CI; agent-created PRs require the same approval policy as human-created PRs; dependency changes trigger SCA and license review automatically” is a control.\n\nThe difference matters because controls create a path to yes. Vague risk creates a permanent no that nobody has to own.\n\nYou can keep the hitching posts if you want.\n\nJust stop calling them chargers."},"companion_artifacts":[{"type":"executive_brief","label":"Executive brief","url":"https://agentdrivendevelopment.com/executive-brief/the-ai-soft-ban-assessment/"},{"type":"executive_deck","label":"Executive deck","url":"https://agentdrivendevelopment.com/wp-content/uploads/2026/05/the-ai-soft-ban-assessment.html"},{"type":"podcast_audio","label":"Podcast audio","url":"https://agentdrivendevelopment.com/wp-content/uploads/audio/posts/the-ai-soft-ban-assessment.mp3"},{"type":"podcast_transcript","label":"Podcast transcript","url":"https://agentdrivendevelopment.com/transcript/the-ai-soft-ban-assessment/"}]}
