
The Adoption Psychology Playbook

Why teams adopt or resist AI. Psychology research turned into a practical playbook for leaders navigating AI change management.
20 March 2025 · 9 min read
Tim Hatherley-Greene
Chief Operating Officer
Dr Gerson Tuazon
AI Strategy & Health Innovation
Gerson spent three years researching why people adopt or resist new technology in organisational settings. I spent a decade watching teams navigate change. When we compared notes, the patterns were almost identical. The gap was not between research and practice. It was between what organisations think drives adoption and what actually does.

What You Need to Know

  • Resistance to AI is rational, not irrational. People resist AI for legitimate reasons: job security concerns, loss of autonomy, forced workflow changes, and past experience with failed technology rollouts. Treating resistance as ignorance guarantees failure.
  • Adoption follows psychological safety, not training volume. Teams adopt AI when they feel safe to experiment, fail, and ask questions. More training sessions do not fix a culture problem.
  • The "early adopter" strategy backfires in AI. Targeting enthusiasts first creates a perception gap where AI becomes "that thing the tech people use." Broad, low-stakes exposure works better.
  • Identity threat is the hidden blocker. When AI touches tasks that define someone's professional identity ("I'm the person who analyses the data"), resistance becomes personal and deep. This requires different change management than capability-based resistance.

The Research Foundation

Gerson's research draws on self-determination theory, technology acceptance models, and organisational psychology. The core finding is straightforward: technology adoption is not primarily a skills problem. It is a motivation problem. And motivation is shaped by three psychological needs: autonomy, competence, and relatedness.
Autonomy means feeling that you have choice. When AI is imposed on people without input, it threatens autonomy regardless of how good the technology is.
Competence means feeling capable. AI can make experts feel like beginners, particularly when the AI handles tasks they spent years mastering.
Relatedness means feeling connected to others. When AI changes team dynamics or removes collaborative work, it disrupts the social fabric that makes work meaningful.
Every failed AI rollout I studied had the same pattern: leadership treated adoption as a training problem when it was actually a motivation problem.
— Dr Gerson Tuazon, AI Strategy & Health Innovation

The Five Resistance Patterns

Combining Gerson's research with field observations, we see five distinct resistance patterns. Each requires a different response.

1. Capability Anxiety

What it looks like: "I don't know how to use this." Hesitation, avoidance, excessive help-seeking.
What is actually happening: The person feels incompetent. They are afraid of looking foolish in front of colleagues. The AI makes them feel like a beginner in their own domain.
What works: Peer learning in small groups. Not formal training, but spaces where people can experiment together without judgement. Pair an uncertain user with a patient colleague, not an AI champion. The goal is normalising the learning curve, not accelerating through it.

2. Identity Threat

What it looks like: "This is not what I was hired to do." Philosophical objections, arguments about quality, refusal to engage even after training.
What is actually happening: The AI is threatening the person's professional identity. They have built expertise and reputation around skills that AI now performs. This is an existential concern, not a skills gap.
What works: Reframe the role, not the tool. Help the person see how their expertise is elevated by AI, not replaced by it. The analyst does not become an AI operator. They become the person who ensures AI analysis is correct, contextualised, and actionable. That requires more expertise, not less.

3. Trust Deficit

What it looks like: "How do I know it's right?" Constant checking, refusal to act on AI recommendations, manual workarounds.
What is actually happening: The person does not trust the AI's outputs and does not have a framework for evaluating them. Past experience with unreliable technology amplifies this.
What works: Transparent evaluation. Show the person how the AI was tested, where it fails, and what the error rates are. Give them permission and tools to verify outputs. Trust builds through evidence, not reassurance.

4. Autonomy Loss

What it looks like: "Nobody asked me." Compliance without engagement, doing the minimum required, passive resistance.
What is actually happening: The AI was imposed without input. The person feels like a recipient of change rather than a participant in it.
What works: Involve people in the deployment decisions. Not in the model selection (they do not care), but in the workflow design. How should the AI fit into their existing process? What tasks should it handle? What should it not touch? When people co-design the implementation, adoption follows.

5. Social Disruption

What it looks like: "It's not the same anymore." Nostalgia for old processes, complaints about team dynamics, withdrawal from collaborative work.
What is actually happening: AI has changed the social structure of work. Tasks that were collaborative are now individual. Knowledge that was shared through conversation is now retrieved from a system. The relational fabric of the team has been altered.
What works: Deliberately redesign collaboration around AI. If AI handles data retrieval, create new spaces for the team to discuss and interpret the data together. The work changes, but the connections need to persist.

The Playbook

Phase 1: Psychological Safety (Weeks 1-4)

Before any AI deployment, establish that experimentation is safe. Leadership must explicitly communicate that using AI poorly is expected and acceptable during the learning period. No performance metrics tied to AI usage in the first quarter.
Create low-stakes AI exposure. Let people use AI for personal tasks (writing emails, summarising documents, brainstorming ideas) before introducing it into core workflows. This builds familiarity without threatening professional identity.

Phase 2: Co-Design (Weeks 4-8)

Involve the people who will use the AI in designing how it fits their workflow. Not a suggestion box. Working sessions where their input directly shapes the implementation. This addresses autonomy concerns and surfaces identity threats early.
Map the resistance patterns. You will see all five in most teams. Identify which patterns are dominant and design your change approach accordingly.

Phase 3: Supported Launch (Weeks 8-12)

Deploy with peer support structures, not just training. Pair users with colleagues, not AI champions. Create weekly spaces for sharing experiences, frustrations, and discoveries.
Monitor adoption patterns, not just usage metrics. High usage with low satisfaction is worse than moderate usage with genuine engagement. Watch for compliance-without-engagement, the most common failure pattern.

Phase 4: Identity Evolution (Months 3-6)

This is where most programmes stop and where the real work begins. Help people redefine their professional identity in the context of AI. What does "expert analyst" mean when AI handles the analysis? What does "experienced advisor" mean when AI has the knowledge base?
The answer, consistently, is that human expertise becomes about judgement, context, relationships, and the things AI cannot do. But people need help seeing and believing that.

What Leaders Get Wrong

The most common mistake is treating AI adoption as a technology rollout. It is not. It is an organisational change programme that happens to involve technology. The technology is the easy part. The psychology is the hard part.
The second most common mistake is impatience. Sustainable AI adoption takes six to twelve months, not six to twelve weeks. Organisations that rush past the psychological work end up with high AI usage numbers and low actual value, because people are using the tool without trusting it, adapting it, or integrating it into their real work.

The Adoption Test

Ask your team: "Do you feel safe experimenting with AI and failing?" If the answer is no, you have a culture problem that no amount of training will fix. Start there.