We keep seeing the same pattern. An AI system is designed by engineers, approved by executives, and deployed to users who had no input into its design. The engineering is solid. The business case is clear. And the person who has to use it eight hours a day finds it confusing, disruptive, and harder than doing things the old way. This is a design failure, not a user failure.
What You Need to Know
- Most enterprise AI is designed for the buyer's requirements, not the user's workflow
- User-centred AI deployment applies product design principles to the adoption process itself
- The gap between "technically works" and "actually usable" is where most adoption fails
- Involving end users in the design process reduces resistance and increases adoption by 40-60%
42% of enterprise AI tools are abandoned within 6 months due to poor user experience (Source: Forrester, 2025)
3x higher adoption when end users are involved in AI system design (Source: Nielsen Norman Group, 2024)
The Buyer-User Gap
Enterprise AI systems are sold to executives and deployed to teams. The executive cares about ROI, efficiency, and competitive advantage. The team member cares about: "Does this fit my workflow? Is it faster than what I'm doing now? Can I trust it? Will it make my day better or worse?"
These aren't the same questions. And when the AI system is designed to answer the first set without addressing the second, you get technically sound systems with poor adoption.
I think about the entire project, the user experience, the client's vision, the business outcome. Those aren't always the same thing. That's the job. And with AI, that gap between what the buyer wants and what the user needs is wider than with any technology I've seen.
Rainui Teihotua
Chief Creative Officer
What User-Centred AI Deployment Looks Like
Understand the Current Workflow First
Before designing anything, map how people actually work today. Not the documented process. The real one. The workarounds, the shortcuts, the tribal knowledge.
This mapping reveals where AI can genuinely help (reducing friction in existing workflows) versus where it will create friction (adding steps, breaking routines, requiring new skills).
Design the AI Into the Workflow, Not Alongside It
The most common deployment mistake: building an AI tool as a separate application that users switch to for specific tasks. Every context switch is a friction point. Every separate login is a barrier.
The best AI deployments are invisible. The AI is embedded in the tools people already use. The claims processor doesn't open an AI application. Their existing workflow now includes AI-powered pre-classification, and they see the results in the same interface they've always used.
Test With Real Users on Real Tasks
Not a demo. Not a synthetic scenario. Put the AI system in front of the people who'll use it daily, with their actual data, in their actual workflow. Watch them. Note where they hesitate, where they get confused, where they override the AI, where they give up.
If they need a training manual, we've failed. That's been my principle for twenty years of design. It applies to AI systems just as much as it applies to web applications. The interface should make the AI's output clear, actionable, and trustworthy without explanation.
Rainui Teihotua
Chief Creative Officer
Design for Trust, Not Just Accuracy
Trust is a design problem. People trust AI outputs they can understand, verify, and override. The interface should show:
- What the AI decided and a brief indication of why
- How confident the AI is (high confidence vs low confidence signals)
- How to override when the user disagrees
- How to provide feedback so the system improves
These design elements don't just improve usability. They build the trust that drives sustained adoption.
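The four trust elements above can be made concrete in the data the interface receives. The sketch below is illustrative only: the class name, thresholds, and fields are assumptions, not part of any specific product, but they show one way a single suggestion object can carry the decision, the rationale, a confidence band, an override path, and a feedback record.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISuggestion:
    """Hypothetical payload carrying everything the trust UI needs."""
    decision: str                      # what the AI decided
    rationale: str                     # brief indication of why
    confidence: float                  # raw model score in [0, 1]
    user_override: Optional[str] = None
    feedback: list = field(default_factory=list)

    def confidence_band(self) -> str:
        """Translate a raw score into a signal users can act on."""
        if self.confidence >= 0.85:
            return "high"
        if self.confidence >= 0.6:
            return "medium"
        return "low"

    def override(self, new_decision: str, reason: str) -> None:
        """The user always has the last word; record why, for improvement."""
        self.user_override = new_decision
        self.feedback.append(
            {"original": self.decision, "corrected": new_decision, "reason": reason}
        )

    @property
    def final_decision(self) -> str:
        return self.user_override or self.decision


suggestion = AISuggestion(
    decision="Route claim to fast-track",
    rationale="Matches pattern of previously fast-tracked claims",
    confidence=0.91,
)
print(suggestion.confidence_band())  # high
suggestion.override("Manual review", "Claimant flagged by fraud team")
print(suggestion.final_decision)     # Manual review
```

The point of the structure is that override and feedback are first-class fields, not afterthoughts: the interface cannot show what the payload does not carry.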
Design for the Transition Period
The first two weeks of using a new AI system are the hardest. The interface should acknowledge this. Progressive disclosure: start with the simplest AI capabilities and introduce complexity gradually. Contextual help: guidance available at the point of need, not in a separate document.
The transition UX should feel supportive, not overwhelming. The system should be easier on day 1 than it is on day 30, because by day 30 the user is ready for more capability.
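Progressive disclosure can be as simple as gating capabilities by how long someone has used the system. A minimal sketch, assuming a hypothetical capability schedule and feature names (the day thresholds are illustrative, not a recommendation):

```python
# Hypothetical progressive-disclosure schedule: capabilities unlock as the
# user gains experience, so day 1 is simpler than day 30.
CAPABILITY_SCHEDULE = [
    (0,  "ai_pre_classification"),    # available from day 1
    (7,  "confidence_explanations"),  # unlocked after the first week
    (14, "bulk_suggestions"),
    (30, "custom_rules"),
]

def available_capabilities(days_active: int) -> list[str]:
    """Return the features this user should see, given their tenure."""
    return [name for day, name in CAPABILITY_SCHEDULE if days_active >= day]

print(available_capabilities(1))   # ['ai_pre_classification']
print(available_capabilities(30))  # all four capabilities
```

The same schedule can drive contextual help: when a capability first unlocks, show its guidance inline, at the point of need.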
The Design-Change Management Intersection
User-centred AI deployment sits at the intersection of product design and change management. The design team shapes how the AI system feels to use. The change management team shapes how the organisation supports people through the transition. When these teams work together, the result is AI systems that are both usable and adopted.
When they work separately, you get beautiful interfaces with poor adoption (design without change management) or heavy change programmes for clunky tools (change management without design).
Practical Steps
- Include a designer in the AI deployment team. Not after the system is built. From the start.
- Conduct user research before deployment. Five conversations with end users will reveal more about adoption barriers than any stakeholder analysis.
- Prototype the user experience before building the full system. Low-fidelity prototypes tested with real users prevent expensive redesign later.
- Design the override capability. Users need to feel in control. A system with no override is a system people will find ways to circumvent.
- Iterate after deployment. The first version will be wrong in ways you couldn't predict. Build a feedback loop and be willing to change the interface based on how people actually use it.
AI deployment that ignores the user is deployment that fails. The technology works. The business case makes sense. But if the person using it eight hours a day finds it frustrating, confusing, or threatening, adoption won't happen. Design for them first. The business outcomes follow.

