Gerson and I approach AI trust from different directions. I design interfaces. He studies the psychology of trust formation. What we've found working together is that most AI interfaces violate basic trust principles, not because the designers are unskilled, but because the principles aren't well understood. Trust in AI follows patterns that psychology research has documented for decades. Enterprise AI interfaces mostly ignore them.
What You Need to Know
- Trust in AI interfaces is not a single dimension. It has at least three components: competence trust (will it give accurate answers?), integrity trust (will it be honest about uncertainty?), and benevolence trust (is it acting in my interest?).
- Most AI interfaces optimise only for competence trust, presenting outputs confidently. This undermines integrity trust when users discover errors.
- Calibrated confidence, showing how certain the AI is about its output, increases overall trust more than projecting certainty does.
- Trust recovery after an AI error follows different patterns than trust recovery after a human error. AI gets fewer second chances.
The Three Trust Components
Competence Trust
Users need to believe the AI system can do what it claims. This is the component most AI interfaces focus on: showing impressive outputs, demonstrating capability, highlighting accuracy metrics.
But competence trust is fragile. A single confident error can damage it more than ten correct outputs can build it, because users judge AI competence differently from human competence. A human who makes one mistake in ten is seen as generally competent. An AI that makes one mistake in ten is seen as unreliable.
"Users who trust the AI's competence but not its honesty develop workaround behaviours that undermine the system's value."
Dr Gerson Tuazon, AI Strategy & Health Innovation
Integrity Trust
Users need to believe the AI system is honest about what it knows and doesn't know. This means showing uncertainty, flagging low-confidence outputs, and being transparent about limitations.
Most AI interfaces fail here because they present all outputs with equal confidence. A claim the model is 95% confident about looks identical to a claim it is 60% confident about. Users learn this through experience, usually by discovering errors in confidently presented outputs. This damages integrity trust retroactively: "If it was wrong about that and didn't tell me, what else is it wrong about?"
Benevolence Trust
Users need to believe the AI system is acting in their interest. In enterprise contexts, this connects to questions about surveillance, performance monitoring, and job security. An AI tool that helps workers is trusted differently from an AI tool that monitors workers, even if the interface is identical.
Benevolence trust is largely determined by context and communication, not by interface design. But interface design can undermine it: audit trails that feel like surveillance, output quality metrics that feel like performance evaluation, and automation that feels like replacement.
Designing for Trust
Show Calibrated Confidence
Instead of presenting all outputs equally, indicate the system's confidence level. This can be as simple as colour coding (high, medium, or low confidence) or as nuanced as a confidence percentage with an explanation of what drives it.
Calibrated confidence accomplishes two things: it helps users make better decisions about which outputs to trust, and it builds integrity trust by demonstrating that the system knows what it doesn't know.
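As a rough sketch of how this might look in an interface layer, the snippet below maps a model-reported confidence score onto a display tier. The thresholds (0.85 and 0.6), the tier names, and the example fields are illustrative assumptions, not values from any particular product or study.

```typescript
// Minimal sketch: map a model-reported confidence score to a display tier.
// Thresholds and tier names are illustrative assumptions.

type ConfidenceTier = "high" | "medium" | "low";

interface CalibratedOutput {
  text: string;
  confidence: number;   // model-reported probability, 0..1
  tier: ConfidenceTier; // what the interface actually shows
  explanation: string;  // short note on what drives the confidence
}

function toTier(confidence: number): ConfidenceTier {
  if (confidence >= 0.85) return "high";
  if (confidence >= 0.6) return "medium";
  return "low";
}

function presentOutput(text: string, confidence: number, explanation: string): CalibratedOutput {
  return { text, confidence, tier: toTier(confidence), explanation };
}

// Example: the same answer is rendered differently depending on confidence,
// instead of everything looking equally certain.
const answer = presentOutput(
  "The contract renewal date is 14 March 2026.",
  0.62,
  "Date extracted from a scanned document with low OCR quality."
);
console.log(`${answer.tier.toUpperCase()} confidence: ${answer.text} (${answer.explanation})`);
```

The point of the tier is not precision. It is that a 60% answer and a 95% answer should not look the same on screen.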
Design for Error Recovery
When the AI makes an error, the interface should make correction easy and the error's scope clear. "This output was wrong. Here's what it should have been. Here's how to correct it. Here are the related outputs that may also need review."
Error recovery design is more important for long-term trust than error prevention design, because errors will happen and the user's experience of recovering from them shapes their ongoing relationship with the system.
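Here is a minimal sketch of what an error-correction record might carry, assuming each output has a stable identifier and the system can trace which downstream outputs depended on it. The field names and example data are hypothetical.

```typescript
// Minimal sketch of an error-correction record. Assumes outputs have stable ids
// and dependencies between outputs are tracked. Field names are illustrative.

interface Correction {
  outputId: string;           // the output the user flagged as wrong
  originalText: string;
  correctedText: string;
  correctedBy: string;        // who fixed it
  relatedOutputIds: string[]; // downstream outputs that may also need review
}

function buildReviewMessage(c: Correction): string {
  return [
    `Output ${c.outputId} was corrected by ${c.correctedBy}.`,
    `Was: "${c.originalText}"`,
    `Now: "${c.correctedText}"`,
    c.relatedOutputIds.length > 0
      ? `Please review ${c.relatedOutputIds.length} related output(s): ${c.relatedOutputIds.join(", ")}`
      : "No related outputs were affected.",
  ].join("\n");
}

// Example usage with hypothetical data.
const correction: Correction = {
  outputId: "summary-042",
  originalText: "Revenue grew 12% year on year.",
  correctedText: "Revenue grew 8% year on year.",
  correctedBy: "j.smith",
  relatedOutputIds: ["board-deck-007", "forecast-019"],
};
console.log(buildReviewMessage(correction));
```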
Make the AI's Reasoning Visible
Users trust outputs more when they can see how the AI arrived at them. This doesn't require full explainability (which is technically challenging). It requires sufficient transparency: what data was considered, what factors influenced the output, what alternatives were considered.
Visible reasoning transforms the user's relationship with the AI from "accepting or rejecting a black box output" to "evaluating a reasoned recommendation." The second relationship is healthier and more productive.
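One way to picture "sufficient transparency" is as a data structure the interface can render, assuming the system can report which sources it drew on and which alternatives it weighed. The payload shape and field names below are assumptions for illustration, not a real product schema.

```typescript
// Minimal sketch of a response payload that exposes enough reasoning to evaluate,
// without claiming full explainability. Field names are illustrative assumptions.

interface ReasonedRecommendation {
  recommendation: string;
  sourcesConsidered: string[]; // which documents or records fed the answer
  keyFactors: string[];        // what pushed the system toward this answer
  alternativesConsidered: { option: string; whyNotChosen: string }[];
}

function renderReasoning(r: ReasonedRecommendation): string {
  const alts = r.alternativesConsidered
    .map(a => `  - ${a.option}: ${a.whyNotChosen}`)
    .join("\n");
  return [
    `Recommendation: ${r.recommendation}`,
    `Based on: ${r.sourcesConsidered.join(", ")}`,
    `Key factors: ${r.keyFactors.join("; ")}`,
    `Alternatives considered:\n${alts}`,
  ].join("\n");
}

// Example usage with hypothetical data.
const rec: ReasonedRecommendation = {
  recommendation: "Escalate this claim for manual review.",
  sourcesConsidered: ["claim-8841", "policy-terms-v3", "prior-claims-history"],
  keyFactors: ["claim amount is 4x the policy average", "two similar claims in the last 90 days"],
  alternativesConsidered: [
    { option: "Auto-approve", whyNotChosen: "amount exceeds the auto-approval threshold" },
  ],
};
console.log(renderReasoning(rec));
```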
Separate Help From Surveillance
Enterprise AI interfaces must clearly distinguish between features that help the user and features that report on the user. Activity logging for system improvement should be transparent. Performance metrics should be user-controlled. The line between assistance and monitoring should be explicit.
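One way to make that line explicit is to keep assistance telemetry and people-monitoring as separate, user-visible settings rather than bundling them. The sketch below assumes a simple settings object; the defaults and field names are illustrative, not taken from any particular system.

```typescript
// Minimal sketch: assistance telemetry and monitoring kept apart at the
// configuration level, so the boundary is explicit. Defaults are illustrative.

interface TelemetrySettings {
  // Data used to improve the system; visible to the user and described in plain terms.
  systemImprovementLogging: boolean;
  // Metrics about the individual user; off unless the user turns them on.
  shareMyUsageWithManager: boolean;
}

const defaults: TelemetrySettings = {
  systemImprovementLogging: true,
  shareMyUsageWithManager: false, // monitoring is opt-in, not bundled with assistance
};

function describeSettings(s: TelemetrySettings): string {
  return [
    s.systemImprovementLogging
      ? "Anonymous usage data is collected to improve suggestions."
      : "No usage data is collected.",
    s.shareMyUsageWithManager
      ? "Your usage metrics are shared with your manager."
      : "Your usage metrics are not shared with anyone.",
  ].join(" ");
}

console.log(describeSettings(defaults));
```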
Trust in AI is not a feature. It is the foundation that determines whether users engage with the system genuinely or develop the minimum-viable-compliance behaviours that make AI adoption statistics look better than reality. Designing for trust requires understanding the psychology, not just the pixels.

