
AI and Te Ao Māori (Building Responsible AI in Aotearoa)

AI must respect indigenous knowledge, data sovereignty, and cultural values. How we think about building AI that serves all of Aotearoa.
28 November 2023·8 min read
Dr Tania Wolfgramm
Chief Research Officer
Isaac Rolfe
Managing Director
The global conversation about AI ethics focuses on bias, fairness, and transparency. Those matter here too. But in Aotearoa, there's a dimension that most AI frameworks don't address: the relationship between artificial intelligence and indigenous knowledge, values, and sovereignty.

What You Need to Know

  • AI systems trained on global datasets encode global assumptions. When deployed in Aotearoa, they can misrepresent, misclassify, or erase te ao Māori perspectives. This isn't a theoretical risk - it happens whenever an AI system interprets Māori concepts through a Western lens without appropriate grounding.
  • Māori data sovereignty (the right of Māori to control the collection, ownership, and application of data about Māori) is a governance requirement, not a nice-to-have. Te Tiriti o Waitangi obligations apply to AI just as they apply to every other domain of public and private sector operation in Aotearoa.
  • Building AI that respects te ao Māori isn't about adding a "cultural layer" on top of a Western system. It requires thinking differently about what the AI is for, who it serves, and what values it encodes from the ground up.
  • This is an area where Aotearoa can lead. Our bicultural framework, while imperfect, gives us tools and perspectives that most AI-developing nations lack entirely.

Why This Matters Now

Most of the AI systems being deployed in New Zealand enterprises right now were built in San Francisco. They were trained on predominantly English-language, predominantly Western datasets. They encode assumptions about language, culture, knowledge, and values that reflect their origins.
For many enterprise applications, this is fine. An AI that processes insurance claims or summarises financial reports doesn't need cultural grounding. The data is structured, the domain is well-defined, and the cultural context is minimal.
But the application of AI is expanding rapidly into areas where culture, language, and values matter deeply:
  • Health and wellbeing - where models of wellness (like Te Whare Tapa Whā) differ fundamentally from Western biomedical models
  • Education - where learning approaches, knowledge systems, and assessment frameworks have cultural dimensions
  • Legal and policy research - where Treaty obligations, tikanga Māori, and indigenous rights are core to the analysis
  • Public services - where equitable outcomes require understanding diverse communities and their needs
In these domains, an AI system that doesn't understand te ao Māori isn't just incomplete. It's potentially harmful.

Māori Data Sovereignty

The concept of Māori data sovereignty - articulated through the work of Te Mana Raraunga (the Māori Data Sovereignty Network) - asserts that data about Māori should be subject to Māori governance. This isn't about restricting access. It's about ensuring that Māori communities have meaningful input into how their data is collected, stored, used, and interpreted.
For enterprise AI, this creates specific obligations:
Data provenance. When AI systems ingest data that relates to Māori communities, organisations, or knowledge, the provenance of that data matters. Was it collected with appropriate consent? Does the community it came from have visibility into how it's being used? Are there governance mechanisms in place?
Representation. AI systems trained on general datasets will underrepresent or misrepresent Māori perspectives. A health AI trained primarily on Western clinical data will provide Western clinical recommendations, even when the patient's cultural context calls for a different approach. Ensuring appropriate representation requires deliberate curation, not just larger datasets.
Interpretation. Perhaps the hardest challenge. AI models interpret data through the lens of their training. When a model encounters te reo Māori, it may translate rather than interpret. When it encounters tikanga, it may classify rather than understand. The difference between translation and interpretation, between classification and understanding, is where cultural harm occurs.
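The provenance questions above can be made concrete in a data pipeline. The sketch below is a purely illustrative, minimal way to attach a provenance record to a dataset and gate ingestion on it; the field names (`consent_basis`, `community_visibility`, `governance_contact`) are our hypothetical examples, not a standard schema, and a real implementation would be shaped in consultation with the communities involved.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DatasetProvenance:
    """Minimal provenance record checked before a dataset reaches an AI system.

    Field names are illustrative only - not a standard or endorsed schema.
    """
    source: str                         # where the data originated
    consent_basis: Optional[str]        # how consent was obtained, if documented
    community_visibility: bool          # can the source community see how it's used?
    governance_contact: Optional[str]   # who to consult about reuse


def cleared_for_ingestion(p: DatasetProvenance) -> bool:
    """Cleared only when consent, community visibility, and a governance
    contact are all documented - absence of any one blocks ingestion."""
    return (
        p.consent_basis is not None
        and p.community_visibility
        and p.governance_contact is not None
    )


# A dataset with undocumented consent and no governance contact is blocked.
record = DatasetProvenance(
    source="community health survey",
    consent_basis=None,
    community_visibility=False,
    governance_contact=None,
)
print(cleared_for_ingestion(record))  # False
```

The point of a gate like this is that the default answer is "no": data without a documented provenance story never reaches training or inference, rather than being used until someone objects.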
67% of Māori communities surveyed expressed concern about how their data is used by government and private organisations (source: Te Mana Raraunga, Principles of Māori Data Sovereignty, 2018).

How We Think About This

We don't have all the answers. Nobody does - this is emerging territory globally, and even more so in the specific context of AI and indigenous knowledge. But we've developed some principles that guide our work.

Start with Whakapapa

Whakapapa - genealogy, but more broadly, the interconnectedness of all things - provides a useful frame for thinking about AI systems. Every AI output has a whakapapa: the data it was trained on, the assumptions encoded in its architecture, the values of the people who built it, the context in which it's deployed.
Understanding that whakapapa helps us identify where cultural misalignment might occur and address it proactively rather than reactively.

Kaitiakitanga Over Ownership

Kaitiakitanga - guardianship, stewardship - reframes the relationship between organisations and data. You're not the owner of data about communities you serve. You're the guardian. That guardianship comes with obligations: to protect, to use responsibly, to return value to the source.
This principle has practical implications for how we design AI data pipelines. Data governance isn't just about security and access control. It's about stewardship - ensuring that data is used in ways that serve the communities it represents.

Manaakitanga in Design

Manaakitanga - care, generosity, respect for others - should be visible in how AI systems interact with users. An AI health coach that understands hauora through a Māori lens doesn't just translate English health advice into te reo Māori. It engages with a fundamentally different model of wellbeing - one that includes wairua (spiritual), hinengaro (mental/emotional), tinana (physical), and whānau (family/social).
We've applied this thinking in our early health AI work, and the difference between "translated" advice and "grounded" advice is immediately apparent to users.
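To make the contrast concrete: a "grounded" system represents all four dimensions of Te Whare Tapa Whā rather than only the physical one. The sketch below is a deliberately simplified illustration - the numeric scores and the `weakest_dimension` helper are our hypothetical constructs, not how the model itself works (it is holistic, not numeric), and not our production implementation.

```python
from dataclasses import dataclass


@dataclass
class TeWhareTapaWha:
    """The four walls of Te Whare Tapa Whā.

    Numeric scores (0.0-1.0) are purely illustrative - the model is
    holistic and is not reducible to numbers in practice.
    """
    wairua: float     # spiritual wellbeing
    hinengaro: float  # mental and emotional wellbeing
    tinana: float     # physical wellbeing
    whanau: float     # family and social wellbeing


def weakest_dimension(h: TeWhareTapaWha) -> str:
    """Return the dimension most in need of support. A system grounded in
    this model attends to the whole whare, not just tinana."""
    dims = {
        "wairua": h.wairua,
        "hinengaro": h.hinengaro,
        "tinana": h.tinana,
        "whanau": h.whanau,
    }
    return min(dims, key=dims.get)


# Physically well, but spiritually depleted: a biomedical-only system
# would see 0.9 tinana and report no concern.
person = TeWhareTapaWha(wairua=0.4, hinengaro=0.7, tinana=0.9, whanau=0.6)
print(weakest_dimension(person))  # wairua
```

The design point is structural: if the data model only has a `tinana` field, no amount of downstream prompting can recover the other three walls of the whare.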

Consult, Don't Assume

The most important principle is the simplest: don't assume you know what's culturally appropriate. Consult with the communities your AI will serve. Not as a checkbox exercise, but as a genuine partnership that shapes the design, development, and governance of the system.
This takes time. It's not compatible with the "move fast" ethos of most AI development. But it's non-negotiable if you're building AI that touches cultural knowledge, community data, or indigenous perspectives.

Where Aotearoa Can Lead

There's an opportunity here that goes beyond risk mitigation. Aotearoa has something most AI-developing nations don't: a constitutional framework for bicultural partnership, an active indigenous data sovereignty movement, and a growing body of work on how technology and indigenous values can coexist.
If we do this well - if we build AI that genuinely respects te ao Māori, that implements meaningful data sovereignty, that demonstrates how indigenous values can improve AI governance - we create something globally significant. Not because other countries will copy our specific approach, but because we'll demonstrate that responsible AI and indigenous rights aren't in tension. They're complementary.
That's the kind of AI leadership worth pursuing.