AI is being deployed to generate communications at a speed and scale that human teams can't match. For mainstream audiences, the output is often adequate. For Māori and Pacific communities, AI-generated communications without cultural context don't just miss the mark - they cause real harm. And that harm compounds every time the system is deployed.
What You Need to Know
- AI-generated communications trained on mainstream English-language data produce content that reflects mainstream cultural assumptions. When deployed to Māori and Pacific audiences, these assumptions cause miscommunication, distrust, and disengagement.
- The speed advantage of AI communications becomes a liability when cultural errors are produced and distributed faster than human review can catch them.
- Te reo Māori generated by current AI systems ranges from grammatically awkward to genuinely offensive. AI systems lack the cultural knowledge to use te reo appropriately - context, register, and tikanga all matter.
- Cultural competence in AI communications isn't a fine-tuning problem. It requires Māori and Pacific involvement in system design, training data curation, and output governance.
The Problem at Scale
AI-generated content is already widespread in government, health, education, and corporate communications. Organisations use AI to draft public-facing content, generate social media posts, produce internal communications, and create health information materials. The efficiency gains are genuine.
But efficiency without cultural competence is a multiplier of harm.
When a human communicator makes a cultural error in a piece of content, the error affects one document. When an AI system embeds a cultural assumption into its generation model, that assumption propagates across every piece of content it produces. The error is systematic, not incidental.
92% of AI-generated health communications tested contained culturally inappropriate framing for Māori audiences. (Source: University of Otago, AI and Health Communication Research Group, 2025)
I've reviewed AI-generated government communications intended for Māori audiences. The te reo was present but incorrect - wrong register, inappropriate formality levels, and in one case, a kupu used in a context that was culturally offensive. The English content defaulted to individual-focused framing for health decisions that Māori whānau make collectively. The tone was institutional when it should have been relational.
Each of these errors would have been caught by a Māori communicator reviewing the content. But the point of AI-generated content is speed. The review step is the bottleneck that AI is supposed to remove. So the errors get published, the community notices, and the trust deficit deepens.
Te Reo and AI
Current AI systems handle te reo Māori poorly, and the reasons are structural.
Te reo Māori is a relatively low-resource language in AI training data. The models that generate te reo have learned from a limited and uneven corpus. The result is output that might be grammatically recognisable but lacks the nuance that native speakers immediately notice.
More fundamentally, te reo Māori is not just a language. It carries tikanga. The appropriate use of te reo depends on context - who is speaking, to whom, about what, in what setting. A formal mihi requires different language from a casual kōrero. Te reo used in a health context carries different weight from te reo used in an educational context. AI systems don't have access to this contextual knowledge, and no amount of training data fully compensates for it.
The Harm of Getting It Wrong
When an organisation publishes AI-generated te reo that's incorrect or contextually inappropriate, the damage extends beyond the specific content.
It signals to Māori communities that the organisation doesn't value te reo enough to have it produced by people who actually speak it. It contributes to the normalisation of incorrect te reo in public spaces. And for communities engaged in te reo revitalisation - where every public use of te reo carries weight - seeing mangled te reo from AI systems is actively discouraging.
67% of te reo Māori speakers surveyed expressed concern about AI-generated te reo undermining language quality standards. (Source: Te Taura Whiri i te Reo Māori, Digital Language Use Survey, 2025)
Pacific Languages Face the Same Risk
What's true for te reo Māori is true for Pacific languages in Aotearoa, often more acutely. Samoan, Tongan, Cook Islands Māori, Niuean, and other Pacific languages have even less representation in AI training data. AI-generated content in these languages ranges from inadequate to nonsensical.
Pacific communities in Aotearoa already face language erosion pressures. AI-generated content that uses Pacific languages poorly contributes to that erosion by normalising incorrect usage and reducing the perceived need for native-speaking communicators.
What Cultural Competence Requires
Solving this isn't a technical fine-tuning exercise. It requires structural changes to how AI communications systems are designed and governed.
Māori and Pacific involvement in training data. The data used to train AI systems for communications needs to be curated with input from language communities. This includes decisions about what data is appropriate to use, how it should be weighted, and what cultural constraints apply. Data sovereignty principles must govern this process.
Human review as a feature, not a bottleneck. For any AI-generated content intended for Māori or Pacific audiences, human review by culturally competent communicators must be a required step, not an optional quality check. The speed advantage of AI is meaningless if the output causes harm.
Community governance over AI communications. Māori and Pacific communities should have governance input into how AI systems generate content about or for their communities. This isn't advisory. It's authority over the systems that shape how their communities are communicated with.
Transparency about AI use. Communities deserve to know when communications they receive have been generated by AI. This transparency allows communities to calibrate their trust accordingly and provides accountability for the organisations deploying these systems.
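Two of the requirements above - mandatory human review and transparency about AI use - can be expressed directly in how a publishing pipeline is built. The sketch below is purely illustrative, assuming a hypothetical content system; every name and field in it is invented, and real governance would be set by the communities themselves, not by code alone:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical publication gate: AI-generated content for governed
# audiences cannot be released until a culturally competent reviewer
# has signed off, and every item carries an AI-use disclosure.
GOVERNED_AUDIENCES = {"maori", "pacific"}

@dataclass
class Draft:
    text: str
    audience: str                        # e.g. "maori", "pacific", "general"
    ai_generated: bool
    reviewed_by: Optional[str] = None    # culturally competent reviewer, if any

def ready_to_publish(draft: Draft) -> bool:
    """Review is a required step, not an optional quality check:
    AI content for governed audiences is blocked without sign-off."""
    if draft.ai_generated and draft.audience in GOVERNED_AUDIENCES:
        return draft.reviewed_by is not None
    return True

def disclosure(draft: Draft) -> str:
    """Transparency label attached to every published item."""
    return "AI-assisted draft" if draft.ai_generated else "Human-authored"

draft = Draft(text="Kia ora koutou ...", audience="maori", ai_generated=True)
assert not ready_to_publish(draft)       # blocked: no reviewer sign-off yet
draft.reviewed_by = "reo-speaking communicator"
assert ready_to_publish(draft)           # sign-off recorded; may publish
```

The design choice is that the gate sits in the pipeline itself, so speed pressure cannot quietly skip the review step - exactly the failure mode described above, where errors are published faster than review can catch them.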
AI can generate a thousand messages in the time it takes me to write one. But if those thousand messages erode trust with the communities they're intended for, the efficiency is an illusion.
- Hannah Terangi Wynne, Strategic Communications Advisory
The Path Forward
AI communications tools will continue to improve. Te reo capability will get better. Cultural understanding in AI systems will deepen. But improvement requires intentional investment in the right direction - not just more data, but better governance, community involvement, and respect for the languages and cultures these systems attempt to serve.
The organisations that get this right will be the ones that treat cultural competence as a design requirement, not a post-production review. The ones that get it wrong will discover that the trust they lose with Māori and Pacific communities takes far longer to rebuild than the time they saved using AI.
