The word "resistance" is doing a lot of heavy lifting in enterprise AI conversations. When a team doesn't adopt an AI tool, leadership calls it resistance. When people raise concerns, that's resistance. When adoption metrics are low, the diagnosis is resistance. Gerson and I think the label is wrong, and the misdiagnosis leads to the wrong treatment.
What You Need to Know
- What gets labelled "resistance" is usually a rational response: unanswered questions, skill gaps, workflow mismatches, trust deficits, or identity threats
- Each type of pushback requires a different response. Treating them all as "resistance to overcome" applies the wrong solution to most of them
- Adjusting the environment (training, integration, communication) is more effective than trying to change people's attitudes
- Concerns about AI tools are data about how to improve adoption, not obstacles to push past
The Resistance Myth
Calling something "resistance" implies that the correct course of action is to overcome it. Push harder. Communicate more. Make adoption mandatory. This framing treats the workforce as an obstacle between the organisation and its AI goals.
But what gets labelled as resistance is usually something else entirely:
Rational concern. "I don't understand how this changes my role" is not resistance. It is a reasonable question that hasn't been answered.
Skill gap. "I don't know how to use this effectively" is not resistance. It is a training need.
Workflow mismatch. "This doesn't fit into how I actually work" is not resistance. It is a design problem.
Trust deficit. "I don't believe this tool's outputs are reliable" is not resistance. It is a quality issue.
Identity threat. "This makes me feel like my expertise doesn't matter" is not resistance. It is a psychological safety issue.
Each of these requires a different response. Treating them all as "resistance to be overcome" applies the wrong solution to most of them.
People are not opposed to AI. They are opposed to how AI is being introduced into their working lives.
Dr Gerson Tuazon
AI Strategy & Health Innovation
The Research Perspective
Tania's background in research methodology brings a useful lens: what does the evidence actually say?
The technology adoption literature (most notably the Technology Acceptance Model and its extensions) is clear on several points:
Perceived usefulness predicts adoption. If people believe the tool will help them do their job better, they use it. If they don't, they don't. This is not resistance. It is rational evaluation.
Perceived ease of use predicts adoption. If the tool is easy to learn and fits into existing workflows, adoption is higher. If it requires significant behaviour change, adoption is lower. Again, rational evaluation.
Social influence matters. If respected peers use and endorse the tool, adoption increases. If nobody the user respects is using it, adoption stalls. This is social proof, not resistance.
Trust in the organisation matters. If the organisation has a track record of implementing technology well, with genuine support and honest communication, people give AI the benefit of the doubt. If the organisation has a track record of badly managed technology rollouts, scepticism is warranted. That is not resistance either. It is earned scepticism.
What to Do Instead
Diagnose Before Treating
Before labelling low adoption as resistance, investigate what's actually happening. Talk to the people who aren't using the tool. Not in a town hall. In small, safe conversations where honest answers are possible.
Common findings:
- The training was too brief or too abstract
- The tool doesn't integrate with their actual workflow
- They tried it, got poor results, and stopped
- Nobody showed them how it applies to their specific tasks
- They have concerns about their role that haven't been addressed
Each finding points to a specific, addressable problem. None of them require "overcoming resistance."
Treat Concerns as Data
Every concern raised about an AI tool is information about how to make the tool work better. "The outputs aren't accurate enough" tells you the system needs improvement or the use case needs refinement. "I don't trust it" tells you the trust-building process is inadequate. "It takes longer than the old way" tells you the workflow integration needs work.
Organisations that treat concerns as data improve faster than organisations that treat concerns as resistance.
Adjust the Environment, Not the People
Instead of trying to change people's attitudes toward AI, change the conditions under which they encounter it:
- Better training, matched to actual skill levels
- Better workflow integration, designed with user input
- Better communication about role changes, with specifics not platitudes
- Better metrics, adjusted for the learning curve
- Better support, available when people actually need it
The language we use shapes the solutions we build. "Resistance" implies a force to be overcome. "Rational response to inadequate conditions" implies conditions to be improved. The second frame leads to better outcomes for everyone: the organisation, the AI programme, and the people whose working lives are changing.

