The global AI ethics conversation has settled into a comfortable groove: bias, fairness, transparency, accountability. These matter deeply. But together they form a Western liberal framework applied to a technology that operates across every culture, every knowledge system, and every community. The conversation is incomplete, and the gaps matter more than most people realise.
The Mainstream Framework (and Its Limits)
The dominant AI ethics frameworks - from the EU, the OECD, the IEEE - converge on similar principles: fairness, transparency, accountability, privacy, safety. Good principles. Necessary principles. But they share an unexamined assumption: that the primary ethical challenge of AI is how it treats individuals within an existing system.
What they don't adequately address:
Cultural Erasure Through Data
AI models learn from data. That data overwhelmingly reflects English-language, Western, industrialised perspectives. When these models are deployed globally, they don't just process information - they impose a worldview.
A language model that struggles with te reo Māori isn't just a technical limitation. It's a signal about whose language matters in the AI era. A knowledge system that can't represent whakapapa-based relationships isn't just incomplete. It's structurally excluding a way of understanding the world.
This isn't a bias problem in the traditional sense. The model isn't biased against Māori. It simply doesn't know Māori knowledge exists. Absence is its own form of harm.
"The deeper question isn't whether the model treats different groups unfairly - it's whose knowledge is in the model at all. The absence of indigenous knowledge systems from AI training data isn't bias; it's erasure."
Dr Tania Wolfgramm, Chief Research Officer
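One way to see this gap concretely is to measure tokenizer "fertility" - how many subword tokens a model needs per word in a given language. Languages under-represented in training data typically fragment into far more tokens. The sketch below is a minimal probe, assuming the Hugging Face transformers library; the gpt2 tokenizer and the sample sentences are illustrative choices, not drawn from any specific system discussed here.

```python
# Requires: pip install transformers
# Rough probe of language representation via subword "fertility"
# (tokens per whitespace-delimited word). Higher fertility usually
# means the tokenizer - and the corpus behind it - saw less of that
# language. Tokenizer choice and sentences are illustrative.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

samples = {
    "English": "The river is the ancestor of the people who live beside it.",
    "Te reo Māori": "Ko te awa te tupuna o te iwi e noho ana i tōna taha.",
}

for lang, text in samples.items():
    n_words = len(text.split())
    n_tokens = len(tok.encode(text))
    print(f"{lang}: {n_tokens} tokens / {n_words} words = "
          f"{n_tokens / n_words:.2f} tokens per word")
```

High fertility isn't only an efficiency issue: it degrades output quality and raises per-token costs for speakers of that language, compounding the representation gap described above.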
Data Sovereignty as an Ethical Imperative
The mainstream AI ethics conversation treats data as a resource to be governed. Privacy frameworks protect individual data rights. Security frameworks protect data from unauthorised access. But they don't address the question of collective data sovereignty.
For Māori, data is taonga. Te Mana Raraunga articulates principles of Māori data sovereignty: rangatiratanga (authority), whakapapa (relationships), whanaungatanga (obligations), and kaitiakitanga (guardianship). These aren't abstract principles. They're practical governance requirements that determine how data about Māori communities should be collected, stored, processed, and shared.
Most AI ethics frameworks don't have a category for this. They can handle "individual consent" and "data protection." They can't handle "collective sovereignty over data that represents a community's knowledge, identity, and future."
This isn't a uniquely Māori concern. Indigenous communities globally, Pacific nations, First Nations in Australia and Canada - all face the same gap between individual privacy frameworks and collective data sovereignty needs.
Epistemic Justice
Here's the concept that's almost entirely absent from the AI ethics mainstream: epistemic justice. Whose knowledge counts? Whose ways of knowing are valid? When AI systems are built on Western scientific epistemology and deployed across diverse knowledge traditions, they don't just process information differently. They implicitly rank knowledge systems.
A medical AI trained on Western clinical evidence may produce recommendations that conflict with traditional healing knowledge. The ethical question isn't just "is the recommendation accurate?" It's "who decides what accuracy means, and whose evidence counts?"
This isn't an argument against evidence-based medicine. It's an argument for epistemic humility - recognising that AI systems embed epistemological assumptions, and those assumptions should be transparent, not invisible.
What This Means for NZ
Aotearoa is one of the few places in the world where these questions are not theoretical. Te Tiriti o Waitangi creates genuine governance obligations around Māori data, Māori knowledge, and Māori self-determination. These obligations apply to AI just as they apply to every other domain.
Practically, this means:
AI systems processing Māori data need Māori governance. Not just a Māori representative on an advisory board. Genuine governance input into what data is used, how it's processed, and what decisions are made with it.
AI deployment in sectors affecting Māori communities requires engagement. Health AI, education AI, social services AI, environmental AI - all touch Māori communities and Māori interests. Deployment without engagement isn't just poor practice. It's a potential Treaty breach.
AI knowledge systems should include, not exclude, mātauranga Māori. Where AI systems are used in domains where mātauranga Māori is relevant (environmental management, health, education), the system should be capable of incorporating that knowledge, not just Western scientific knowledge.
NZ enterprises have an opportunity to lead. Our governance context forces us to grapple with questions that global AI ethics frameworks haven't addressed. That's not a burden. It's a competitive advantage. Enterprises that build culturally intelligent AI governance are building better governance, full stop.
The Missing Dimensions Globally
The gaps I've described in the NZ context have global parallels:
Language and cultural representation. AI models perform dramatically better in English than in most other languages. Performance degrades further for languages with smaller digital footprints. This isn't just a technical gap - it determines who can participate in the AI economy.
Power concentration. AI development is concentrated in a handful of companies, predominantly in the US and China. The ethical frameworks governing AI are being written by the same communities that build it. Communities most affected by AI deployment have the least input into its governance.
Environmental cost. The carbon footprint of training and running large AI models is substantial. The environmental cost is borne globally. The economic benefit accrues to specific regions and companies. This distributional injustice barely features in mainstream AI ethics. A back-of-envelope sketch of the scale follows this list.
Labour extraction. AI training relies on human data labelling, often performed by workers in low-income countries for minimal pay, under poor conditions. The "AI ethics" label rarely extends to the ethics of how AI systems are built.
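To put "substantial" in rough numbers, the standard back-of-envelope estimate multiplies accelerator power draw by fleet size, training time, datacentre overhead (PUE), and grid carbon intensity. Every figure in this sketch is an illustrative assumption, not a measurement of any real training run.

```python
# Back-of-envelope training-emissions estimate. The formula
# (GPUs x power x hours x PUE x grid carbon intensity) is standard;
# every value below is an illustrative assumption, not a measurement.
num_gpus = 1000            # accelerators in the run (assumed)
gpu_power_kw = 0.7         # average draw per accelerator, kW (assumed)
training_hours = 24 * 30   # a one-month run (assumed)
pue = 1.2                  # datacentre power usage effectiveness (assumed)
grid_kgco2_per_kwh = 0.4   # grid carbon intensity, kg CO2e/kWh (assumed)

energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000

print(f"Energy: {energy_kwh:,.0f} kWh")              # ~604,800 kWh
print(f"Emissions: {emissions_tonnes:,.0f} t CO2e")  # ~242 tonnes
```

Under these assumptions a single month-long run consumes roughly as much electricity as dozens of households use in a year - and which communities host that grid determines who bears the cost.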
67% of AI ethics guidelines globally are produced by organisations in North America or Europe (Source: Algorithm Watch, AI Ethics Guidelines Global Inventory, 2023).
What Better Looks Like
I'm not arguing that the mainstream AI ethics framework is wrong. I'm arguing it's insufficient. Better looks like:
Expanding the stakeholder set. AI ethics governance should include representatives of communities affected by AI, not just the companies building and deploying it. In NZ, this means Māori governance. Globally, it means indigenous and underrepresented communities.
Moving beyond individual rights to collective rights. Data sovereignty, cultural preservation, and community self-determination need to sit alongside individual privacy and fairness in AI governance frameworks.
Acknowledging epistemological assumptions. AI systems embed assumptions about what knowledge is valid. Making those assumptions explicit - and creating space for alternative knowledge systems - is an ethical requirement, not a nice-to-have.
Distributing power. The communities most affected by AI should have the most input into its governance. Currently, the opposite is true. Reversing this isn't just ethically correct. It produces better, more robust, more trustworthy AI systems.
Actionable Takeaways
- Audit your AI systems for cultural assumptions. What knowledge is represented? What's missing? Whose perspectives shaped the training data? A starting-point audit sketch follows this list.
- Engage affected communities before deployment. Not after. Not as a checkbox. As genuine co-design participants.
- Build governance frameworks that include collective rights. Individual privacy is necessary but insufficient. Community data sovereignty matters.
- Support linguistic diversity in AI. If your AI system serves communities that speak languages other than English, invest in those languages. Don't accept English-only as a default.
- Read the Te Mana Raraunga principles, even if you're not in NZ. The framework is the most developed articulation of indigenous data sovereignty, and it has lessons for every AI governance effort.
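As a starting point for the first and fourth takeaways, the sketch below runs a language-identification pass over a text corpus, assuming the open-source langdetect package; the sample documents are illustrative. Tellingly, langdetect ships roughly 55 language profiles and, as far as I can tell, te reo Māori is not among them - so Māori text gets mislabelled, and the audit tooling itself exhibits the gap it is meant to surface.

```python
# Requires: pip install langdetect
# First-pass audit of which languages a corpus actually contains.
# Note: langdetect has no te reo Māori profile, so Māori text will be
# misclassified - itself an instance of the representation gap.
from collections import Counter
from langdetect import detect, DetectorFactory
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make detection deterministic across runs

def language_distribution(documents):
    """Count detected languages across a corpus of text documents."""
    counts = Counter()
    for doc in documents:
        try:
            counts[detect(doc)] += 1
        except LangDetectException:
            counts["undetected"] += 1
    return counts

# Illustrative corpus; in practice, stream your training or input data.
corpus = [
    "Data governance determines who decides how knowledge is used.",
    "Ko te awa te tupuna o te iwi.",
    "La souveraineté des données est un enjeu collectif.",
]
print(language_distribution(corpus))
```

Counts like these tell you what's present, not whose knowledge the text encodes - treat them as the opening question of an audit, not the answer.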
