Enterprise UX has always been a patience game. Users don't choose enterprise tools. They're assigned them. The usual response is to make tools as efficient as possible and hope people adapt. Gerson's work on perspective-taking and empathy in organisations has given me a different frame: what if we designed enterprise AI tools starting from what users actually feel, not just what they need to do?
What You Need to Know
- AI tools ask users to trust an opaque system, which requires designing for emotion, not just efficiency
- Empathetic onboarding acknowledges the adjustment period honestly rather than assuming enthusiasm
- Error states should respect user expertise ("Your domain knowledge should guide the final decision") rather than just flagging technical failure
- The cost of building empathy into the design process is modest. The cost of low adoption and tool abandonment is significant.
The Empathy Deficit in Enterprise Design
Consumer products invest heavily in emotional design. Enterprise products invest in feature completeness. The assumption is that enterprise users are professionals who will use whatever tool they're given, and the design priority should be capability, not experience.
This assumption is wrong, and it's particularly wrong for AI tools, because AI tools ask something that traditional enterprise tools don't: trust in an opaque system.
A spreadsheet does what you tell it. An AI tool makes decisions you can't fully verify. That requires a fundamentally different relationship between user and tool, and that relationship is shaped by how the tool makes people feel.
When people feel understood, they are more willing to engage with uncertainty. An AI tool that acknowledges the user's context creates the psychological conditions for productive interaction. One that ignores these factors creates the conditions for avoidance.
Dr Gerson Tuazon
AI Strategy & Health Innovation
Where Empathy Applies
Onboarding That Acknowledges the Learning Curve
Most enterprise AI onboarding assumes enthusiasm. "Welcome to your new AI assistant! Here's what it can do." This ignores the reality that many users feel anxious, sceptical, or resentful about being asked to change how they work.
Empathetic onboarding acknowledges the adjustment:
- "This tool will change your workflow. Here's how, specifically."
- "You'll probably find the first week slower, not faster. That's normal."
- "Here are the three things this tool does well. Here are the things it doesn't do."
Honesty during onboarding builds the trust that enables productive use later.
Error States That Respect Expertise
When an AI tool makes an error, the user's expertise is what catches it. Error states should acknowledge this:
- "Your judgement here is what matters. The AI suggestion may not fit this context."
- Rather than "Error: output may be incorrect," try "This output has lower confidence. Your domain knowledge should guide the final decision."
The framing matters. "The AI might be wrong" is less useful than "Your expertise is the final check." Both are true. The second respects the user's professional identity.
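The copy swap above can be made concrete. The following is a minimal sketch, assuming a hypothetical `error_state_message` helper and an illustrative confidence threshold of 0.6; the function name, threshold, and message strings are assumptions for illustration, not a prescribed API.

```python
# Hypothetical sketch: select error-state copy from a model confidence
# score, framing the user's expertise as the final check rather than
# merely flagging a technical failure. Threshold is an assumption.

def error_state_message(confidence: float, low_threshold: float = 0.6) -> str:
    """Return user-facing copy that respects the user's expertise."""
    if confidence < low_threshold:
        # Lower-confidence output: name the uncertainty, then hand
        # the decision back to the user's domain knowledge.
        return ("This output has lower confidence. "
                "Your domain knowledge should guide the final decision.")
    # Higher-confidence output: still position the user as the judge.
    return ("Your judgement here is what matters. "
            "The AI suggestion may not fit this context.")
```

The point of the sketch is that the same confidence signal can drive either framing; the design choice is which one the user reads.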
Workflow Integration That Doesn't Disrupt
Empathetic design means fitting into existing workflows rather than demanding new ones. If a claims processor reviews claims in a specific order using a specific screen layout, the AI tool should enhance that flow, not replace it.
This requires understanding the existing workflow deeply enough to integrate without disruption. It requires observation, conversation, and iteration with real users, not assumptions about what would be more efficient.
Feedback Mechanisms That Feel Safe
Enterprise AI tools need feedback from users to improve. But feedback mechanisms that feel like surveillance or performance monitoring won't get honest input.
Design feedback as a contribution, not a report:
- "Help us improve: was this output useful?"
- "Your input makes the tool better for everyone on the team"
- Feedback should be anonymous by default, with attribution only when the user chooses
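The anonymous-by-default principle can be expressed directly in the data model. This is a minimal sketch with illustrative field and function names (`Feedback`, `submit_feedback`, `attribute_as` are assumptions, not an existing API): identity is attached only when the user explicitly opts in.

```python
# Hypothetical sketch of a feedback record that is anonymous by default,
# with attribution only on explicit opt-in. Field names are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    output_id: str           # which AI output the feedback is about
    was_useful: bool         # "was this output useful?"
    comment: str = ""        # optional free-text contribution
    user_id: Optional[str] = None  # stays None unless the user opts in

def submit_feedback(output_id: str, was_useful: bool,
                    comment: str = "",
                    attribute_as: Optional[str] = None) -> Feedback:
    """Record feedback; the caller must explicitly pass an identity
    for it to be stored, so the safe path is the default path."""
    return Feedback(output_id, was_useful, comment, user_id=attribute_as)
```

Making anonymity the default rather than a checkbox means honest input never requires the user to remember to protect themselves.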
The Business Case for Empathy
Empathetic design is not a luxury. In enterprise AI:
- Tools that users trust get used. Tools they don't trust get worked around.
- Adoption rates directly correlate with the quality of the user experience during the first two weeks
- Workaround behaviour (users bypassing the AI to do things manually) is the strongest signal that the design has failed
The cost of building empathy into the design process is modest. The cost of low adoption, workarounds, and eventual tool abandonment is significant.
Enterprise AI UX is not consumer UX. The context is different, the constraints are different, and the emotional landscape is different. But the principle is the same: design for the person, not just the task. Gerson's research confirms what good designers have always known: people engage with tools that respect them and resist tools that don't.