
Change Management for AI Adoption

AI adoption is a change management problem first, a technology problem second. A framework for getting enterprise teams to actually use AI tools.
28 April 2023 · 7 min read
Tim Hatherley-Greene
Chief Operating Officer
The technology works. The business case is sound. The pilot was a success. And six months later, nobody's using it. This is the pattern we see more than any other in enterprise technology, and AI is making it worse because the gap between "impressive demo" and "adopted tool" has never been wider.

The Adoption Gap

Every enterprise technology initiative faces a change management challenge. AI makes it harder for three specific reasons:
Fear. AI triggers job security concerns in a way that previous technology waves didn't. When the tool can write, analyse, and reason, "this will make your job easier" sounds a lot like "this will make your job unnecessary." Even when that's not the intent, the perception is real and it shapes behaviour.
Trust. AI makes mistakes confidently. For professionals who are accountable for their work, using a tool that hallucinates is a real risk. The rational response to unreliable AI isn't resistance - it's caution. And caution, without a framework, looks like non-adoption.
Disruption. AI doesn't slot neatly into existing workflows. It changes how work is done, not just what tools are used. Document review becomes AI-assisted document review. Research becomes AI-augmented research. Each of these is a workflow redesign, and people don't adopt workflow changes just because someone told them to.

The Framework

I've spent enough time in enterprise change programmes to know that adoption doesn't happen through mandates. It happens through design. Here's the framework we're developing for AI specifically.

1. Start with the Pain, Not the Technology

Don't lead with "we're implementing AI." Lead with "we're fixing the thing that frustrates you."
If claims assessors spend 40% of their time searching for policy documents, the conversation isn't about AI. It's about reducing search time. AI happens to be the mechanism, but the value proposition is "less time hunting, more time assessing." That framing changes everything about how people receive the change.

2. Involve the Experts Early

The people who'll use the AI system are the domain experts. They know the edge cases, the workarounds, the unwritten rules. Excluding them from design and testing is the fastest way to build something that technically works and practically fails.
Bring them in during the pilot. Let them break it. Let them tell you where it's wrong. This does two things: it produces a better system, and it creates advocates who feel ownership rather than imposition.
"The worst adoption outcomes I've seen always share one trait: the tool was designed without the people who have to use it. Every time."
- Tim Hatherley-Greene, Chief Operating Officer

3. Design the Workflow, Not Just the Tool

AI doesn't work in isolation. It works within a workflow. And the workflow design determines whether people actually use it.
Key questions:
  • Where does AI output go? Into a review queue? Directly to the customer? Into a decision support dashboard?
  • Who reviews AI output? Is there a human-in-the-loop step? When can it be skipped?
  • What happens when the AI is wrong? Is there a clear escalation path? Is correction easy?
  • How does feedback flow back? Can users flag errors in a way that improves the system?
If the workflow is clunky, people will route around it regardless of how good the AI is.
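One way to pin these questions down is to make the routing decision explicit rather than implicit. The sketch below is illustrative only: the threshold, field names, and destinations are hypothetical, not drawn from any particular system.

```python
# Minimal human-in-the-loop routing sketch. The confidence threshold
# and the destination names are hypothetical illustrations.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # below this, output always goes to a human reviewer


@dataclass
class AIOutput:
    content: str
    confidence: float        # model's self-reported confidence, 0..1
    flagged_by_user: bool = False  # user pressed "this is wrong"


def route(output: AIOutput) -> str:
    """Decide where an AI output goes next in the workflow."""
    if output.flagged_by_user:
        return "escalation"        # clear path when the AI is wrong
    if output.confidence < REVIEW_THRESHOLD:
        return "review_queue"      # human-in-the-loop step
    return "decision_dashboard"    # high-confidence output proceeds
```

The point of writing it down, even this crudely, is that every branch forces an answer to one of the questions above: a missing branch is a gap users will discover for you.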

4. Measure Adoption, Not Just Accuracy

Most AI pilots measure accuracy: how often is the AI correct? That matters, but it's not enough. You also need to measure:
  • Usage rate. What percentage of eligible users are actually using the tool?
  • Workflow completion. When users start with the AI, do they finish with it, or do they abandon and revert to the old method?
  • Time-to-value. How long does it take a new user to get genuine value from the tool?
  • Sentiment. Do users trust the tool? Do they find it useful? What would make it better?
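The first two of these metrics fall straight out of a usage log. A minimal sketch, with entirely hypothetical event names and log entries:

```python
# Sketch: computing usage rate and workflow completion from a usage
# log. The event names and log entries here are hypothetical.

from collections import defaultdict

events = [
    # (user_id, event) — illustrative log entries
    ("u1", "started"), ("u1", "completed"),
    ("u2", "started"), ("u2", "abandoned"),
    ("u3", "started"), ("u3", "completed"),
]
eligible_users = {"u1", "u2", "u3", "u4", "u5"}

# Usage rate: share of eligible users who touched the tool at all.
active = {user for user, _ in events}
usage_rate = len(active) / len(eligible_users)  # 3/5 = 0.60

# Workflow completion: of those who started, how many finished?
outcomes = defaultdict(set)
for user, event in events:
    outcomes[event].add(user)
completion_rate = len(outcomes["completed"]) / len(outcomes["started"])  # 2/3
```

Time-to-value and sentiment need richer instrumentation (timestamps, surveys), but the same principle applies: decide how you will measure each one before the rollout, not after.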
A system with 95% accuracy and 20% adoption delivers less value than a system with 85% accuracy and 80% adoption.
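The arithmetic behind that claim is worth making explicit. A back-of-the-envelope sketch, assuming delivered value scales with both accuracy and adoption (a simplification, but a useful one):

```python
# Back-of-the-envelope model: value delivered scales with both
# accuracy and adoption. Numbers are illustrative, not benchmarks.

def relative_value(accuracy: float, adoption: float) -> float:
    """Fraction of potential value actually delivered."""
    return accuracy * adoption

high_accuracy = relative_value(accuracy=0.95, adoption=0.20)  # ≈ 0.19
high_adoption = relative_value(accuracy=0.85, adoption=0.80)  # ≈ 0.68

print(f"95% accurate, 20% adopted: {high_accuracy:.2f}")
print(f"85% accurate, 80% adopted: {high_adoption:.2f}")
```

On this model the widely adopted system delivers more than three times the value of the more accurate one, which is why accuracy alone is the wrong yardstick.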
70% of enterprise change programmes fail to achieve their objectives, a figure that has been remarkably consistent for 20 years. (Source: McKinsey Transformation Survey, 2023)

5. Create Champions, Not Mandates

Top-down mandates create compliance, not adoption. Champions create culture change.
Find the early adopters - the people who are genuinely excited about AI and willing to experiment. Give them extra support, early access, and a platform to share what they learn. Peer influence is more powerful than management directives.
One genuine advocate in a team of 20 will drive more adoption than an all-staff email from the CEO.

6. Be Honest About Limitations

Nothing destroys trust faster than overselling. If the AI makes mistakes, say so. If it doesn't handle edge cases well, say so. If it's a first version that will improve, say so.
People can work with imperfect tools when they understand the limitations. They can't work with tools they've been told are reliable and then discover aren't.

The Timeline

Enterprise AI adoption doesn't happen in a sprint. Plan for:
Months 1-2: Pilot with enthusiastic volunteers. Iterate on workflow design. Measure everything.
Months 3-4: Expand to early adopters. Refine based on pilot feedback. Build training materials from real user experiences, not vendor documentation.
Months 5-6: Broader rollout. Champions programme active. Clear support channels. Ongoing measurement.
Month 7 onwards: Continuous improvement. Regular check-ins with users. Iteration on the AI and the workflow.
The organisations that treat AI adoption as a project (launch and move on) will fail. The ones that treat it as a programme (continuous, iterative, human-centred) will succeed.