
Technology Adoption Starts with Trust

Communities don't resist technology because they're behind. They resist because trust hasn't been earned. That's a communications problem, not a tech problem.
12 May 2025 · 8 min read
Hannah Terangi Wynne
Strategic Communications Advisory
Dr Tania Wolfgramm
Chief Research Officer
When communities push back on new technology, the default assumption is that they don't understand it. That they need education, training, a better explainer video. This framing is wrong, and it's costing organisations millions in failed adoption programmes. Communities don't resist technology because they're behind. They resist because trust hasn't been earned.

What You Need to Know

  • Technology adoption in Māori and Pacific communities is a trust problem, not a literacy problem. Framing it as education misses the point entirely.
  • Trust is built through relationships, transparency, and demonstrated respect for community governance. It can't be shortcut with better UX or a marketing campaign.
  • AI adoption is amplifying this dynamic. The opacity of AI systems, combined with valid concerns about data sovereignty, makes trust even harder to earn and easier to lose.
  • Organisations that invest in trust-first approaches see higher adoption rates, better outcomes, and lower long-term costs. The evidence is clear.
  • The communications and change management profession needs to stop treating "resistance" as a problem to solve and start treating it as information to learn from.

The Deficit Framing Problem

I hear it in meetings constantly. "We need to improve digital literacy in Māori communities." "Pacific communities are behind on technology adoption." "We need to educate people about the benefits of AI."
This is deficit framing, and it's both inaccurate and counterproductive. It positions communities as lacking something, usually knowledge or capability, and positions the organisation as the generous provider filling that gap.
The reality is different. Māori and Pacific communities have sophisticated reasons for their relationship with technology. Those reasons are grounded in experience, not ignorance.
62% of Māori respondents cited 'lack of trust in how data will be used' as their primary concern about new digital services (Source: Digital Inclusion Research Group, University of Auckland, 2024).
That's not a literacy gap. That's a trust gap. And it's been earned through decades of experience with systems that collected data from communities and used it against them, or at minimum without their informed consent or benefit.

Why Trust Comes First

Tania and I have talked about this from different angles. My experience is in communications and engagement. Hers is in research and values-based leadership. We arrive at the same place.
Trust isn't a precondition you can tick off and move past. It's the medium through which all meaningful adoption happens. When communities trust the process, the people, and the governance behind a technology, adoption follows naturally. Without that trust, no amount of training closes the gap.
Dr Tania Wolfgramm, Chief Research Officer
In my work at the Ministry of Education, I've seen technology rollouts succeed and fail. The pattern is consistent. The projects that invested in relationship-building before deployment, that involved community leaders in governance, that were transparent about data use, those projects achieved adoption. The ones that led with features and training sessions didn't.

The Trust Equation for Technology

Trust in technology adoption has four components, and they all need to be present.
Relational trust. Do the people bringing this technology have an existing relationship with the community? Have they shown up before? Will they still be here in two years? One-off project teams with fixed-term funding don't build relational trust.
Governance trust. Does the community have genuine input into how the technology is used and how their data is governed? Not advisory input. Governance input. The difference is decision-making authority.
Transparency trust. Can the community see how the technology works, what data it collects, where that data goes, and who benefits? For AI systems, this is particularly challenging because the systems themselves are opaque.
Track record trust. Has the organisation delivering the technology demonstrated trustworthy behaviour in the past? Communities have long memories. An organisation that mishandled data five years ago starts from a deficit, regardless of how good the current project is.

AI Makes This Harder

AI adoption intensifies every dimension of the trust problem.
AI systems are less transparent than traditional software. You can show someone a database and explain what data is stored. Try explaining how a large language model uses training data, or how an AI system arrives at a recommendation. The opacity creates a trust barrier that technical documentation doesn't solve.
78% of indigenous communities globally expressed concern about AI systems using cultural data without appropriate consent (Source: Indigenous AI Working Group, Stanford HAI, 2024).
Data sovereignty concerns are amplified. When an AI system ingests community data, that data may influence outputs in ways that are difficult to trace or control. For Māori communities with clear expectations around data governance, this creates a fundamental tension with how most AI systems are built.
And the pace of AI deployment often doesn't match the pace of trust-building. Organisations want to deploy AI quickly to capture efficiency gains. Communities need time to understand, evaluate, and decide on their own terms. These timelines are in tension, and the organisation's timeline usually wins, at the cost of adoption.

What Trust-First Looks Like

Trust-first technology adoption isn't a framework you bolt on. It's an approach that shapes every decision from the start.
Community governance from day one. Not a reference group that meets quarterly. A governance structure where community representatives have real decision-making authority over how the technology is deployed, what data is collected, and how outcomes are measured.
Transparent data practices. Plain-language explanations of what data the technology uses, where it's stored, who can access it, and how it's protected. For AI systems, this includes explaining model limitations, potential biases, and the boundaries of what the system can and can't do.
Relationship investment before technology deployment. Spend time in the community. Understand their priorities. Ask what technology they actually want, not what you think they need. This might mean your project starts three months later. It also means it actually works.
Long-term commitment. Trust erodes when organisations deploy technology and disappear. Ongoing support, responsiveness to community feedback, and visible commitment to the relationship matter as much as the initial engagement.

The Business Case for Trust

I know some readers are thinking: this sounds expensive. It is, upfront. But the alternative is more expensive.
Failed technology adoption means sunk implementation costs, wasted training investment, and the reputational damage that makes the next project even harder to launch. I've seen government programmes spend millions on technology that communities quietly refused to use. The post-mortem always identifies the same root cause: insufficient community engagement.
Trust-first adoption isn't just ethically right. It's the approach that produces working technology in communities that actually use it. That should be enough justification for any business case.