The UX Challenge Nobody's Talking About

AI-first products have a design problem. Chat interfaces aren't always the answer. And building trust into AI interactions is harder than building the AI itself.
28 June 2023·7 min read
Rainui Teihotua
Chief Creative Officer
Every AI product I've seen this year has the same interface: a chat box. Some of them are beautiful chat boxes. Some have nice avatars and smooth animations. But they're all, fundamentally, a text input and a response area. And for most enterprise use cases, that's the wrong answer.

What You Need to Know

  • The "chat is the interface" assumption comes from ChatGPT's success as a consumer product. Consumer AI and enterprise AI have fundamentally different interaction requirements.
  • Chat interfaces push cognitive load onto the user. They require the user to know what to ask, how to ask it, and how to evaluate an open-ended response. That's fine for exploration. It's terrible for task completion.
  • The real UX challenge for AI products is trust. Not whether the AI is accurate, but whether users believe it's accurate and are willing to act on its outputs.
  • Designing AI interfaces is a new discipline. We're all figuring it out. But the principles of good enterprise design still apply: reduce friction, build confidence, make the next action obvious.

The Chat Box Trap

I understand why everyone starts with chat. ChatGPT proved that natural language interaction works. It's intuitive, it's familiar, and it's technically simple to build. From a development perspective, a chat interface is the fastest way to put AI in front of a user.
But "fastest to build" and "best for the user" aren't the same thing.
Consider a claims assessor using an AI tool to process an insurance claim. In a chat interface, they'd type something like: "Summarise the key details of this claim and flag any concerns." Then they'd read a multi-paragraph response, mentally extract the relevant information, and manually update their claims system.
In a well-designed task interface, the AI analysis would be presented alongside the source document. Key fields would be pre-filled. Confidence indicators would flag which extractions the AI is certain about and which need human review. Concerns would be listed with links to the relevant policy sections. The assessor reviews, adjusts where needed, and approves - all in one screen.
Same AI. Same accuracy. Dramatically different user experience.

The Trust Problem

Here's the thing nobody in AI product development wants to talk about: users don't trust AI outputs.
And they shouldn't. Not blindly, anyway. AI makes mistakes. It hallucinates. It presents wrong information with the same confidence as right information. Experienced professionals in regulated industries know this instinctively, even if they can't articulate why.
So the UX challenge isn't just "how do we present AI outputs?" It's "how do we build enough trust that users actually incorporate AI into their work, while maintaining enough healthy scepticism that they catch errors?"
That's a harder design problem than anything I've worked on in 10 years of enterprise UX.
67% of enterprise workers distrust AI outputs without source attribution.
Source: Edelman, Trust Barometer Special Report: Trust and Technology, 2023

Trust Signals That Work

From our early work on AI products, some patterns are emerging:
Show your sources. When the AI cites a document, link to the document. When it extracts a number, highlight where that number came from. Source attribution is the single most effective trust-building pattern we've found.
Communicate uncertainty. Not every AI output is equally confident. Design a visual language that distinguishes "the AI is sure about this" from "the AI thinks this is probably right" from "the AI is guessing." This isn't technically difficult. Most models can produce confidence scores. The design challenge is communicating that clearly without overwhelming the user.
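To make that concrete, here's a minimal sketch of what such a visual language might look like underneath. The tier names and thresholds are illustrative only - every model calibrates differently, so treat them as placeholders to tune against your own system:

```typescript
// Sketch: map a model confidence score (0-1) onto a small, fixed set of
// display tiers. Three tiers is an assumption; the point is that users
// should learn one consistent visual language, not read raw percentages.
type ConfidenceTier = "confident" | "likely" | "uncertain";

function confidenceTier(score: number): ConfidenceTier {
  if (score >= 0.9) return "confident"; // "the AI is sure about this"
  if (score >= 0.6) return "likely";    // "probably right - worth a glance"
  return "uncertain";                   // "the AI is guessing - review required"
}

// Each tier maps to exactly one visual treatment, so the meaning is
// scannable without overwhelming the user with numbers.
const tierStyle: Record<ConfidenceTier, { icon: string; label: string }> = {
  confident: { icon: "✓", label: "High confidence" },
  likely:    { icon: "~", label: "Check this" },
  uncertain: { icon: "?", label: "Needs review" },
};

console.log(tierStyle[confidenceTier(0.72)].label);
```

The design decision hiding in those ten lines is the hard part: how many tiers users can actually distinguish, and where the cut-offs sit for your model.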
Make correction easy. If a user spots an error, the path to correction should be immediate and obvious. Click the wrong value, type the right one, move on. If correcting AI errors is slow or painful, users will stop checking and either blindly accept (dangerous) or stop using the tool entirely.
Build confidence incrementally. Start AI interactions in a low-stakes mode where the user verifies everything. As they build confidence in the system, progressively give the AI more autonomy. Don't start by asking users to trust a system they've never used.
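As a rough sketch, incremental autonomy can be as simple as gating the tool's mode on the user's own verification history. The level names, sample sizes, and error-rate thresholds below are all hypothetical:

```typescript
// Sketch of incremental autonomy: the tool starts in review-everything
// mode and only relaxes as the user's verification history justifies it.
// All thresholds here are assumptions for illustration.
type AutonomyLevel = "review-all" | "review-flagged" | "auto-apply";

function autonomyLevel(reviewed: number, corrections: number): AutonomyLevel {
  // Treat an empty history as maximally untrusted.
  const errorRate = reviewed === 0 ? 1 : corrections / reviewed;
  if (reviewed < 50 || errorRate > 0.05) return "review-all";
  if (reviewed < 200 || errorRate > 0.01) return "review-flagged";
  return "auto-apply";
}

console.log(autonomyLevel(120, 1)); // a user partway through building trust
```

Note the asymmetry: a rising error rate drops the user back down a level, because trust that can only go up isn't trust, it's complacency.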

Beyond the Chat Box

I'm not saying chat interfaces are useless. They're excellent for:
  • Open-ended exploration and research
  • Natural language search across large knowledge bases
  • Conversational guidance through complex processes
  • Situations where the user genuinely doesn't know what they need
But enterprise AI products need a broader design vocabulary. Task interfaces. Decision support dashboards. Inline AI assistance that augments existing workflows. Structured review and approval screens. Multi-modal interfaces that combine conversation with structured input.
The principles I wrote about last month apply here: progressive disclosure, confidence signalling, graceful degradation. But there's a design layer underneath those principles that's specifically about trust.

What We're Learning

We're deep in this challenge right now, building AI products that need to work in regulated industries where the consequences of getting it wrong are real. Every design decision goes through the filter: does this build trust or undermine it?
Some things we've learned so far:
Speed undermines trust. When an AI responds instantly, users assume it didn't "think" about it. A brief processing indicator - even when the response is ready immediately - gives the interaction appropriate weight. This feels counterintuitive for designers who've spent years optimising for speed.
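A minimal sketch of that pattern: wrap the AI call so the response never resolves before a brief floor, which keeps the processing indicator visible for at least that long. The 600ms value is an assumption, not a researched number:

```typescript
// Sketch: hold back an instant response until a minimum duration has
// passed, so the processing indicator shown while awaiting gets
// appropriate weight. The 600ms floor is illustrative.
async function withMinimumDelay<T>(work: Promise<T>, minMs = 600): Promise<T> {
  const floor = new Promise<void>((resolve) => setTimeout(resolve, minMs));
  // Wait for both the real work and the floor; return only the result.
  const [result] = await Promise.all([work, floor]);
  return result;
}
```

If the model is genuinely slow, the floor costs nothing; it only bites when the response would otherwise appear instantaneous.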
Personality undermines trust. Enterprise users don't want a friendly chatbot. They want a competent tool. Every time the AI says "Great question!" or "I'd be happy to help with that!", it sounds less like a professional system and more like a customer service bot. Tone matters.
Layout communicates authority. A chat response that says "the coverage limit is $500,000" feels different from a structured card that displays "Coverage Limit: $500,000 (Source: Policy Section 4.2, p.12)." Same information. Different confidence level. Design is doing real work here.
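The underlying difference is that the structured card carries its provenance as data rather than prose. A minimal sketch, with illustrative field names:

```typescript
// Sketch: a value the AI extracted, with its provenance attached as
// structured data instead of buried in a sentence. Field names are
// hypothetical, not from any particular system.
interface AttributedValue {
  label: string;
  value: string;
  source: { document: string; section: string; page: number };
}

function renderCard(v: AttributedValue): string {
  return `${v.label}: ${v.value} (Source: ${v.source.document} Section ${v.source.section}, p.${v.source.page})`;
}

const coverage: AttributedValue = {
  label: "Coverage Limit",
  value: "$500,000",
  source: { document: "Policy", section: "4.2", page: 12 },
};
console.log(renderCard(coverage));
```

Once provenance is data, the link back to the source document falls out for free - which is exactly the "show your sources" pattern from earlier.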
This is the UX challenge of our generation. We're designing for a new kind of interaction: human and machine working together on consequential decisions. The designers who figure this out will define how enterprise AI actually works in practice.
I don't have all the answers yet. But I'm pretty sure the answer isn't a chat box.