
Designing for AI Trust

Trust is not just transparency. It is design, evidence, cultural competence, and time. A joint framework for building AI systems people actually trust.
16 December 2025 · 9 min read
Rainui Teihotua
Chief Creative Officer
Dr Tania Wolfgramm
Chief Research Officer
The AI industry's answer to the trust problem is transparency: show users how the AI works and they will trust it. Rainui and I believe this is incomplete. Transparency is one component of trust, and not the most important one. Real trust is built through design, evidence, cultural competence, and time. What follows is a framework for organisations that want to build AI systems people genuinely trust, not just AI systems with transparency labels.

What You Need to Know

  • Transparency alone does not build trust. Showing users how an AI works is necessary but insufficient. Users who do not understand the explanation may trust less, not more. Transparency must be designed for the audience, not for compliance.
  • Trust is built through evidence, not promises. Users trust AI systems that demonstrably work well in their context, not systems that promise to work well. Evidence-based trust requires continuous evaluation and honest communication about performance.
  • Cultural context shapes trust expectations. What builds trust in one cultural context may not work in another. Trust frameworks must account for diverse expectations about authority, evidence, relationships, and accountability.
  • Trust takes time and accumulates through consistent experience. No single feature or disclosure creates trust. Trust builds through repeated positive interactions, honest communication about limitations, and graceful handling of failures.

The Four Dimensions of AI Trust

1. Design Trust

Trust starts with the interface. Before a user evaluates whether the AI's output is accurate, they evaluate whether the system feels trustworthy. This is a design problem.
Visual confidence signals. How outputs are presented communicates confidence. A stark, unqualified AI response feels overconfident. A response with source citations, confidence indicators, and clear scope feels considered. The design communicates the system's relationship with certainty.
Progressive disclosure. Not every user needs the same level of detail. A busy executive needs a summary with a confidence signal. An analyst needs the full reasoning chain with source documents. Design trust through progressive disclosure that serves each user's need for evidence.
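To make these two ideas concrete, here is a minimal sketch of a response shape with explicit confidence, scope, and layered detail. All type and field names are our own illustration, not any particular framework's API:

```typescript
// Hypothetical shape for a trust-aware AI response: confidence and
// scope are first-class fields, not afterthoughts.
interface Citation {
  sourceId: string;
  title: string;
  url: string;
}

interface TrustAwareResponse {
  summary: string;                          // what the busy executive sees first
  confidence: "high" | "medium" | "low";    // visual confidence signal
  scope: string;                            // e.g. "Based on policy documents up to 2024"
  reasoning?: string;                       // revealed on request: the full reasoning chain
  citations?: Citation[];                   // revealed on request: source documents
}

type DisclosureLevel = "summary" | "reasoning" | "full";

// Progressive disclosure: render only the detail this user asked for.
function render(response: TrustAwareResponse, level: DisclosureLevel): string {
  const parts = [
    `${response.summary} (confidence: ${response.confidence})`,
    response.scope,
  ];
  if (level !== "summary" && response.reasoning) parts.push(response.reasoning);
  if (level === "full" && response.citations) {
    parts.push(...response.citations.map(c => `[${c.title}](${c.url})`));
  }
  return parts.join("\n\n");
}
```

The design choice worth noting: summary, confidence, and scope always ship together, while deeper evidence stays one request away for whoever needs it.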
The most trustworthy AI interface I have designed is also the most restrained. Trustworthy and restrained are very different design goals.
Rainui Teihotua
Chief Creative Officer
Graceful failure. How a system fails communicates more about trustworthiness than how it succeeds. An AI that says "I don't have enough information to answer this confidently" is more trustworthy than one that always produces an answer regardless of confidence. Design the failure states as carefully as the success states.
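A minimal sketch of that designed failure state, assuming the system exposes a calibrated confidence score (the threshold and names below are illustrative, not prescriptive):

```typescript
// Illustrative confidence gate: below a threshold, the system declines
// honestly instead of producing an answer regardless of confidence.
const CONFIDENCE_THRESHOLD = 0.7; // assumed value; tune against evaluation data

interface ModelOutput {
  answer: string;
  confidence: number; // assumed to be a calibrated score in [0, 1]
}

function respondOrDecline(output: ModelOutput): string {
  if (output.confidence < CONFIDENCE_THRESHOLD) {
    // The failure state is designed, not accidental: say what is missing
    // and what the user can do next.
    return (
      "I don't have enough information to answer this confidently. " +
      "Try narrowing the question or pointing me to a relevant document."
    );
  }
  return output.answer;
}
```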
Consistency. Trust erodes when the interface behaves unpredictably. Consistent layout, consistent interaction patterns, and consistent response formatting build familiarity. Familiarity builds trust.

2. Evidence Trust

Users trust what they can verify. Evidence trust means giving users the tools and information to evaluate AI outputs against their own knowledge and judgement.
Source attribution. Every AI output should be traceable to its sources. Not just a list of references, but specific, clickable citations that let the user verify the AI's reasoning. In our experience, users rarely check every citation. But knowing they can check builds trust in a way that unsourced outputs cannot.
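In data terms, "specific, clickable citations" might look like this hypothetical span-level structure, where each sentence of the output carries its own pointer back to a verifiable passage:

```typescript
// Hypothetical span-level attribution: each claim shown to the user
// points at the exact source passage that supports it.
interface AttributedClaim {
  text: string;        // the sentence shown to the user
  sourceUrl: string;   // clickable link to the source document
  sourceQuote: string; // the specific passage the claim rests on
}

function toMarkdown(claims: AttributedClaim[]): string {
  // Render each claim with an inline, numbered, verifiable citation.
  return claims
    .map((c, i) => `${c.text} [[${i + 1}]](${c.sourceUrl} "${c.sourceQuote}")`)
    .join(" ");
}
```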
Performance transparency. Share honest performance metrics with users. Not marketing claims, but actual evaluation results. "This system correctly classifies 93% of standard cases and 78% of edge cases. Here is what the edge cases look like." Honest performance data builds trust more effectively than vague promises of accuracy.
Comparison baselines. Help users understand AI performance in context. "The AI agreed with the human expert 91% of the time" is more meaningful than "the AI is 91% accurate." People trust relative performance more than absolute numbers.
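The arithmetic behind both of these paragraphs is simple and worth making explicit. Here is a sketch, with invented names and sample data, of an evaluation summary that reports accuracy by case type alongside expert agreement:

```typescript
// Minimal sketch of evidence-trust arithmetic: accuracy split by case
// type, plus agreement with a human expert as a relatable baseline.
// All names are illustrative; the commented figures echo the examples
// in the text, not real results.
interface EvalCase {
  kind: "standard" | "edge";
  aiLabel: string;
  expertLabel: string;
  trueLabel: string;
}

function evaluationSummary(cases: EvalCase[]) {
  const rate = (xs: EvalCase[], ok: (c: EvalCase) => boolean) =>
    xs.length ? xs.filter(ok).length / xs.length : 0;

  const standard = cases.filter(c => c.kind === "standard");
  const edge = cases.filter(c => c.kind === "edge");

  return {
    standardAccuracy: rate(standard, c => c.aiLabel === c.trueLabel), // e.g. 0.93
    edgeAccuracy: rate(edge, c => c.aiLabel === c.trueLabel),         // e.g. 0.78
    expertAgreement: rate(cases, c => c.aiLabel === c.expertLabel),   // e.g. 0.91
  };
}
```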
Error analysis. Share what the AI gets wrong. Publish anonymised examples of errors, explain why they occurred, and describe what was done to address them. This feels counterintuitive, but acknowledging limitations builds trust far more effectively than claiming none exist.
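A hypothetical record format for publishing anonymised errors follows; the example entry is invented purely for illustration:

```typescript
// Hypothetical format for a published, anonymised error example:
// what went wrong, why, and what changed as a result.
interface PublishedError {
  anonymisedInput: string; // scrubbed of identifying details
  wrongOutput: string;
  expectedOutput: string;
  cause: string;           // why the error occurred
  remediation: string;     // what was done to address it
  publishedOn: string;     // ISO date
}

// Invented example, for illustration only.
const example: PublishedError = {
  anonymisedInput: "Claim form mentioning 'PA' without context",
  wrongOutput: "Interpreted 'PA' as 'physician assistant'",
  expectedOutput: "Interpreted 'PA' as 'prior authorisation'",
  cause: "Ambiguous abbreviation; no domain glossary in retrieval",
  remediation: "Added a domain glossary pass before classification",
  publishedOn: "2025-11-03",
};
```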

3. Cultural Trust

Tania's contribution to this framework centres on the cultural dimension of trust, which the AI industry almost entirely overlooks.
Trust expectations vary by culture. In relational-trust cultures (like many Pacific communities), trust is relationship-based. It develops through personal connections, shared values, and demonstrated commitment over time. An AI system cannot shortcut this by being technically excellent. It must demonstrate cultural competence and alignment with community values.
In institutional-trust cultures (common in Western enterprise), trust is system-based. Credentials, certifications, and documented processes build trust. An AI system builds trust through compliance documentation, security certifications, and formal evaluation reports.
Authority and expertise shape trust. Who endorses the AI matters. In some contexts, an endorsement from a respected elder or community leader builds more trust than any technical certification. In others, academic research and peer review carry more weight. The trust-building strategy must align with the community's trust architecture.
Language and representation matter. An AI system that communicates in the user's language, reflects their cultural context, and avoids imposing unfamiliar frameworks is inherently more trustworthy. This goes beyond translation. It requires cultural competence in how information is framed, how uncertainty is communicated, and how authority is expressed.
Trust is not a feature you add. An AI trust framework that ignores culture is a framework that only works for the culture that designed it.
Dr Tania Wolfgramm
Chief Research Officer

4. Temporal Trust

Trust accumulates over time through consistent experience. No single interaction creates trust. No single feature builds it. Trust is the compound result of repeated positive interactions.
First impression. The first interaction sets expectations. Design the onboarding experience to be honest about what the AI can and cannot do. Set expectations slightly below capability so the system consistently exceeds them.
Consistency. Deliver consistent quality over time. A system that is excellent on Monday and mediocre on Thursday destroys trust faster than a system that is consistently good.
Failure recovery. When the system fails (and it will), recover gracefully. Acknowledge the failure. Explain what happened. Fix it visibly. Users who see a system recover well from failure often trust it more than users who never see it fail.
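One way to make recovery visible in the product itself, sketched as a hypothetical notice shape (the incident described is invented):

```typescript
// Hypothetical in-product incident notice: acknowledge, explain,
// and show the fix, in the user's own language.
interface IncidentNotice {
  acknowledged: string; // plain statement that something went wrong
  explanation: string;  // what happened, without jargon
  fix: string;          // what changed, visibly
}

// Invented example, for illustration only.
const notice: IncidentNotice = {
  acknowledged: "Between 09:00 and 11:30 some answers cited outdated policies.",
  explanation: "A stale document index was served during a migration.",
  fix: "Affected answers are now flagged, and index freshness is checked before serving.",
};
```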
Evolution. Communicate improvements. When the system gets better, tell users. "Based on your feedback, we improved X" builds trust because it demonstrates that the organisation is listening and investing.

A Practical Framework

For organisations building AI systems that need to earn trust:
Phase 1: Design foundations. Build trust signals into the interface from day one. Source attribution, confidence indicators, graceful failure, and consistent design. These are table stakes.
Phase 2: Evidence infrastructure. Build evaluation capability that produces honest, shareable performance data. Publish it. Update it regularly. Do not hide the weaknesses.
Phase 3: Cultural adaptation. Understand the cultural trust expectations of your specific user communities. Adapt the trust-building approach accordingly. What works for enterprise executives may not work for community health workers.
Phase 4: Temporal commitment. Plan for trust-building as a long-term programme, not a launch feature. Set a cadence for performance communication, failure acknowledgement, and improvement updates.

The Trust Test

We use a simple test for AI trust readiness: "Would we trust this system with our own decisions?"
Not hypothetically. Actually. Would you use this AI to inform a real decision in your own work? If the answer is no, your users will not trust it either, regardless of how many transparency features you add.
Trust is earned through design, evidence, cultural competence, and time. There are no shortcuts. But the AI systems that earn genuine trust become the AI systems that organisations depend on. And that is the only kind worth building.