Change management for AI is usually built on instinct and experience. Communication plans, training programmes, stakeholder maps. These are useful. They're also insufficient. There's a growing body of research on what actually drives technology adoption in enterprises, and most change management programmes ignore it.
What You Need to Know
- Evidence-based change management uses research findings, not just practitioner instinct, to design adoption interventions
- The research identifies specific factors that predict adoption success: perceived usefulness, perceived ease of use, social influence, and facilitating conditions
- Most change programmes over-invest in training (which addresses competence) and under-invest in social influence and visible value (which drive motivation)
- Measuring change interventions against outcomes, not just completion, is the shift from practitioner-based to evidence-based practice
3.2x
stronger predictor: perceived usefulness vs training quality in driving sustained adoption
Source: Venkatesh et al., UTAUT2 Model, 2012
48%
of change management interventions are never evaluated for effectiveness
Source: CMI, 2024
The Research Foundation
UTAUT (Venkatesh et al., 2003), building on the earlier Technology Acceptance Model (Davis, 1989), identifies four primary factors that predict whether people adopt new technology:
Performance Expectancy (TAM's perceived usefulness): "Will this make my work better?" Not easier. Better. People adopt technology they believe improves their outcomes.
Effort Expectancy (TAM's perceived ease of use): "How hard is this to use?" The lower the effort required, the higher the adoption. This is why user experience design matters as much as technical capability.
Social Influence: "Are people I respect using this?" Peer behaviour is a stronger driver of adoption than mandate or training. When a trusted colleague uses AI, others follow.
Facilitating Conditions: "Is the environment set up for me to succeed?" Access to tools, support, time, and resources.
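To make the factors operational, here is a minimal Python sketch (an illustration only: UTAUT itself is a moderated regression model, not a simple score) that records a team's four factor scores from a 1-5 survey and surfaces the weakest one as the intervention target:

```python
from dataclasses import dataclass

@dataclass
class AdoptionFactors:
    """One team's UTAUT factor scores from a 1-5 adoption survey."""
    performance_expectancy: float   # "Will this make my work better?"
    effort_expectancy: float        # "How hard is this to use?"
    social_influence: float         # "Are people I respect using this?"
    facilitating_conditions: float  # "Is the environment set up for me?"

    def weakest_factor(self) -> str:
        """Return the factor most in need of investment."""
        scores = {
            "performance_expectancy": self.performance_expectancy,
            "effort_expectancy": self.effort_expectancy,
            "social_influence": self.social_influence,
            "facilitating_conditions": self.facilitating_conditions,
        }
        return min(scores, key=scores.get)

claims_team = AdoptionFactors(3.1, 4.2, 2.4, 3.8)
print(claims_team.weakest_factor())  # social_influence: invest there first
```

The point of the sketch is the output: measurement should end in a named factor to invest in, not just a dashboard number.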
The evaluation research is clear: interventions designed around evidence produce measurably better outcomes than those designed on intuition alone. The same applies to change management. If we're asking organisations to adopt AI based on evidence, we should be managing that adoption based on evidence too.
Dr Tania Wolfgramm
Chief Research Officer
What This Means for Practice
Invest More in Perceived Usefulness
Most change programmes allocate the majority of their budget to training (which addresses effort expectancy) and communication (which addresses awareness). The research says the strongest predictor of sustained adoption is perceived usefulness, which neither investment directly targets.
Practical translation: Don't lead with "here's how to use AI." Lead with "here's how AI makes your specific job better." And make it concrete. "AI will pre-classify your incoming documents so you start each morning with a sorted queue instead of an unsorted pile." That's perceived usefulness, specific to their workflow.
Invest More in Social Influence
Social influence is the second most under-invested factor. People adopt technology when they see peers they respect using it. The champion model works because it leverages social influence directly.
Practical translation: Don't just train champions. Make their usage visible. Create forums for peer demonstration. Let early success stories spread organically. The claims processor who shows their colleague how AI saves them an hour a day is more persuasive than any executive mandate.
Measure What Predicts Adoption
Most change programmes measure activities: training sessions delivered, communications sent, stakeholders engaged. These are process metrics, not outcome predictors.
Better measures:
- Perceived usefulness score (monthly survey, per team)
- Perceived ease of use score
- Social influence indicators (how many people learned about AI from a colleague vs from training?)
- Facilitating conditions score (do people have the tools, time, and support to use AI?)
These metrics lead adoption by roughly 4-8 weeks: when perceived usefulness drops, usage drops later. Intervening on the leading indicator, before usage falls, prevents the adoption decline.
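A minimal sketch of that monitoring loop in Python (the 1-5 Likert scale, the example team history, and the 0.5-point drop threshold are illustrative assumptions, not research-derived values):

```python
from statistics import mean

def factor_score(responses: list[int]) -> float:
    """Average one team's 1-5 Likert responses for one factor, one month."""
    return mean(responses)

def usefulness_alert(history: list[float], drop_threshold: float = 0.5) -> bool:
    """Flag a team whose perceived-usefulness score fell month over month.

    A drop here is the leading indicator: usage tends to decline
    4-8 weeks later, so this is the window for early intervention.
    """
    if len(history) < 2:
        return False
    return (history[-2] - history[-1]) >= drop_threshold

march_responses = [4, 3, 3, 4, 3]  # one team, one factor, one monthly survey
history = [4.1, 4.0, factor_score(march_responses)]  # monthly scores
if usefulness_alert(history):
    print("Perceived usefulness dropped; intervene before usage falls.")
```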
Evaluate Your Change Interventions
Evaluation should be built into every change intervention from the start, not appended at the end. The question isn't "did we deliver the programme?" It's "did the programme produce the outcome it was designed for?" That distinction is the difference between activity and impact.
Dr Tania Wolfgramm
Chief Research Officer
For each change intervention (a training programme, a communication campaign, a champion network), define:
- What outcome it's designed to produce
- How you'll measure that outcome
- When you'll measure it
- What you'll change if the outcome isn't achieved
This turns change management from an art into a discipline.
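One way to enforce that discipline is to make each plan a structured artefact rather than a slide. A minimal sketch, with hypothetical field names and example values mirroring the four questions above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InterventionPlan:
    """Evaluation plan every change intervention must declare up front."""
    name: str
    intended_outcome: str  # what the intervention is designed to produce
    outcome_measure: str   # how the outcome will be measured
    measure_on: date       # when it will be measured
    if_not_achieved: str   # what changes if the outcome is missed

champion_network = InterventionPlan(
    name="Claims champion network",
    intended_outcome="Raise the team's social-influence score from 2.4 to 3.5",
    outcome_measure="Monthly adoption survey, social-influence items",
    measure_on=date(2026, 3, 1),
    if_not_achieved="Replace broadcast comms with peer demonstration sessions",
)
```

A plan that cannot fill in the last field is an activity, not an intervention.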
The Governance Integration
Evidence-based change management connects naturally to AI governance. Both are about building trustworthy, sustainable systems. The same evaluation rigour that governs AI model performance should govern the human adoption of AI systems.
When organisations evaluate their change programmes with the same discipline they evaluate their technology, the entire AI programme becomes more robust, more credible, and more likely to deliver sustained value.
The gap between what research tells us about technology adoption and what most enterprises do about it is significant. Closing that gap doesn't require more budget. It requires shifting investment from activities to outcomes, from intuition to evidence, and from training to the factors that actually predict whether people use the technology you build.