Companies keep mandating AI adoption. Nearly a third of workers are responding by sabotaging it.
A survey from Workplace Intelligence and enterprise AI platform Writer (which has a commercial interest here, but the findings track with broader trends) found that 29% of employees admit to deliberately undermining their company's AI strategy — feeding data into unapproved tools, using unsanctioned apps, or flat-out refusing to use what they've been told to adopt. Gen Z leads at 44%.
The threats behind these mandates are real: 60% of companies plan to lay off employees who won't adopt AI, and 77% won't consider non-proficient workers for promotions. But the strategy driving those threats? Leadership doesn't even believe in it.
According to Writer's enterprise AI report:
- 75% of executives say their company's AI strategy is "more for show" than real guidance
- Nearly half call the entire effort a "massive disappointment," up from 34% last year
- 69% of companies are planning AI-related layoffs — while 39% lack a formal strategy to make money from the tools in the first place
So workers are being threatened with termination over a strategy their own bosses admit is performative. And as HR Dive reported, those broad AI goals from leadership rarely translate into actual guidance on the ground — leaving workers to guess, hit one bad experience, and revert to old routines.
Rather than rethinking the rollout, most companies are doubling down on the workers who already get it. The survey found that 92% of C-suite executives are cultivating a class of "AI elite" employees — workers who are roughly three times more likely than their peers to have been promoted. The gap between super-users and everyone else is widening fast.

You can't threaten people into using tools they don't understand, under a strategy the C-suite itself calls performative, and expect anything other than resentment. The companies that get real returns from AI will be the ones that treat it as a collaboration problem, not a compliance one — with genuine training and an honest answer to the question nobody at the top seems willing to ask: what exactly are we trying to do with this?
