We published a piece earlier this year on the perspective-taking advantage in AI adoption. Since then, Gerson and I have worked with four more enterprises on AI rollouts. The pattern is sharper now: there is a measurable gap between what leaders think their teams experience during AI adoption and what those teams actually experience. We're calling it the empathy gap, and it's the most predictable cause of AI adoption failure we've encountered.
What You Need to Know
- The empathy gap in AI adoption is the difference between leadership's perception of the workforce experience and the actual workforce experience. It is consistently larger than leaders expect.
- Leaders overestimate excitement and underestimate anxiety. In our assessments, leaders rate workforce enthusiasm for AI at roughly double what the workforce itself reports.
- The gap is not caused by bad leadership. It's caused by information asymmetry. Leaders receive filtered information (people tell them what they want to hear) and project their own relationship with AI onto others.
- Closing the gap requires structured listening, not better communication. The problem is not that leaders are saying the wrong things. It's that they're not hearing the right things.
2x
average gap between leadership's estimate of workforce AI enthusiasm and the workforce's self-reported enthusiasm, across four enterprise AI assessments in 2024
Source: RIVER advisory assessments, 2024
What We're Seeing
The Leadership Bubble
In every enterprise AI engagement we've run this year, we've put matched questions about the upcoming AI initiative to two groups:
Leaders: "How do you think your team feels about AI?" Typical responses: "Excited but nervous." "Cautiously optimistic." "Ready to get started."
The team: "How do you feel about AI?" Typical responses: "I don't really understand what's going to change." "Nobody's asked me what I think." "I'm worried about my job but I can't say that."
The gap is not subtle. Leaders consistently perceive higher enthusiasm, lower anxiety, and greater understanding than actually exists. This is not because leaders are out of touch. It's because the information they receive is filtered through organisational dynamics that make honest upward communication difficult.
People don't tell their boss they're scared of AI. They tell their boss they're "interested to learn more." The boss hears enthusiasm. The employee means survival.
The Projection Problem
Gerson's research identifies a specific cognitive mechanism at work. Leaders who are personally excited about AI project that excitement onto their teams. Leaders who have resolved their own anxiety about AI assume their teams have done the same.
This is perspective-taking failure. Not malicious. Not even conscious. But consequential. Because the AI adoption strategy that follows is designed for the workforce the leader imagines, not the workforce that actually exists.
An adoption strategy designed for "cautiously optimistic" people includes training, support, and gradual rollout. An adoption strategy designed for "anxious, uncertain, and unasked" people includes different things: early involvement, honest conversations about job impact, psychological safety for expressing concerns, and visible evidence that feedback actually changes outcomes.
The first strategy is inadequate for the real workforce. The second strategy would address the actual situation. The empathy gap means most enterprises default to the first.
Every leader I've worked with genuinely cares about their team. You can care deeply and still be wrong about what your people are experiencing.
Tim Hatherley-Greene
Chief Operating Officer
The Psychology
Gerson frames the empathy gap through three cognitive patterns well established in organisational psychology research:
Curse of Knowledge
Once you understand something, you cannot easily remember what it was like to not understand it. Leaders who have spent months learning about AI, attending conferences, reading reports, and talking to vendors have a sophisticated understanding of what AI can and cannot do. They forget that their team hasn't had this exposure.
The result: leaders communicate at a level of abstraction that assumes understanding their team doesn't have. "We're implementing an AI-powered claims triage system" makes perfect sense to the leader. To the claims handler, it raises more questions than it answers.
False Consensus Effect
People overestimate the extent to which others share their views. A leader who sees AI as an opportunity unconsciously assumes that reasonable people will see it the same way. When they encounter resistance, they attribute it to the individual ("Sarah's always resistant to change") rather than to a legitimate difference in perspective ("Sarah has valid concerns about her role that I haven't addressed").
Optimism Bias in Adoption Forecasting
Leaders consistently overestimate the speed and smoothness of technology adoption. This is amplified in AI because the technology is genuinely impressive in demos. A compelling demo creates an expectation that adoption will be similarly smooth. It won't be, because the demo doesn't include the messy reality of workflow integration, data quality issues, and the psychological adjustment of changing how you work.
Closing the Gap
Structured Listening
Not surveys. Not town halls. Structured one-on-one or small-group conversations with a clear format:
- What do you understand about the AI initiative?
- What questions do you have that haven't been answered?
- What concerns you most?
- What would make this work well for you?
The key: these conversations must be conducted by someone the participants trust, and the feedback must demonstrably influence decisions. If people share concerns and nothing changes, you've confirmed that speaking up is pointless.
Honest Impact Assessment
Don't dance around job impact. If AI will change roles, say so. If some tasks will be automated, identify which ones. If the net effect is positive (more time for skilled work, less time on admin), explain that with specifics, not platitudes.
People can handle honest information. They can't handle uncertainty. The empathy gap widens when leaders avoid the hard conversations, because the workforce fills the silence with worst-case assumptions.
Early and Genuine Involvement
Involve frontline staff in the design process. Not as a consultation checkbox. As genuine contributors to how the AI system works, what it prioritises, and what the workflow looks like.
When a claims handler helps design the triage interface, two things happen. First, the interface is better, because it reflects real workflow knowledge. Second, the handler has ownership of the system. It's not something that was done to them. It's something they helped build.
Measure the Gap
Add empathy gap measurement to your AI adoption metrics. At the start of a project, assess both leadership perception and workforce experience. Track the gap over time. If it's widening, your communication and involvement strategies aren't working. If it's narrowing, you're on the right track.
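If you want to track this as a number rather than an impression, a minimal sketch of the calculation is below. It assumes paired leadership and workforce responses to matched questions on a 1-to-5 scale; the question names, the scale, and the empathy_gap helper are illustrative assumptions, not a prescribed instrument.

```python
from statistics import mean

# Illustrative sketch only. Assumes both groups answer matched questions
# on a 1-5 scale; question names and scale are assumptions, not a
# prescribed survey instrument.

def empathy_gap(leader_scores: dict[str, float],
                workforce_scores: dict[str, float]) -> dict[str, float]:
    """Per-question gap: positive values mean leadership rates the
    experience more favourably than the workforce reports it."""
    return {q: leader_scores[q] - workforce_scores[q]
            for q in leader_scores if q in workforce_scores}

# Month-zero baseline; repeat each month to see whether the gap narrows.
leaders = {"enthusiasm": 4.2, "understanding": 4.0, "confidence": 3.8}
workforce = {"enthusiasm": 2.1, "understanding": 2.4, "confidence": 2.0}

gap = empathy_gap(leaders, workforce)
print(gap)                            # per-question gaps
print(round(mean(gap.values()), 2))   # headline gap to track over time
```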
4x
higher adoption rates in AI initiatives where the empathy gap was explicitly measured and addressed during the first month
Source: Tuazon, G., and Hatherley-Greene, T., advisory engagement analysis, 2024
Why This Matters More for AI
Every technology change has an empathy gap. But AI amplifies it for specific reasons:
AI threatens identity. For knowledge workers, expertise is identity. An AI that can do part of their job doesn't just threaten their employment. It threatens their sense of professional self-worth. This is a deeper anxiety than "will I lose my job?" and it requires a deeper response.
AI is opaque. Previous technology changes (email, ERP, cloud) were conceptually understandable. AI is not. Most people don't understand how it works, which makes it harder to assess whether it's trustworthy, reliable, or fair. Opacity amplifies anxiety.
AI generates comparisons. "The AI processed 50 claims in the time it takes you to process one." Even when framed positively, these comparisons undermine people's sense of value. Leaders who don't anticipate this reaction create it accidentally.
The Bottom Line
The empathy gap is not a nice-to-know insight. It's the mechanism by which well-intentioned AI projects lose workforce trust, engagement, and, ultimately, adoption.
Closing it requires leaders to do something uncomfortable: find out what their people actually think, rather than what they hope their people think. The information may be difficult to hear. It will also be the most valuable information in your AI adoption strategy.
Start by asking. Then listen. Then act on what you hear.

