I hold the matai title of So'oalo, from Samauga in Savai'i. A matai title isn't a job. It's an obligation. You're accountable - personally, by name - for the wellbeing of your aiga (family) and your village. When something goes wrong, people don't ask "what went wrong with the system?" They ask "who is the matai responsible?" That question, and the weight behind it, is something AI governance hasn't figured out yet.
What You Need to Know
- The matai system has governed Samoan communities for centuries with clear personal accountability, collective decision-making, and authority that comes with obligation.
- Modern AI governance frameworks diffuse accountability across committees, policies, and technical layers until no single person is responsible for outcomes.
- Pacific governance models offer a tested alternative - not as a template to copy, but as a source of principles that Western frameworks are still searching for.
- The gap between governance-as-document and governance-as-practice is where most AI governance fails. Pacific systems have always been governance-as-practice. See also: the data sovereignty imperative.
How the Matai System Works
For people unfamiliar with Samoan governance, a brief explanation.
Every Samoan village is governed by a fono - a council of matai (chiefs). Matai titles are held by individuals but belong to the aiga. You don't choose to become a matai the way you apply for a board seat. You're selected by your family based on your character, your service, and your ability to carry the responsibility.
Once you hold a matai title, you're accountable for your family's conduct, their welfare, their disputes, and their representation in the village fono. If someone in your aiga causes a problem, the village doesn't go to that individual first. They come to the matai.
This creates something that modern governance frameworks talk about endlessly but rarely achieve: a direct line from authority to accountability.
The fono makes decisions collectively. But within that collective process, every matai knows exactly what they're responsible for. There's no ambiguity. No diffusion. No committee you can hide behind.
How AI Governance Works (In Theory)
Most AI governance frameworks published in the last few years follow a similar pattern. They establish principles (fairness, transparency, accountability), create oversight committees, define review processes, and produce documentation.
These frameworks aren't bad. The EU AI Act, New Zealand's Algorithm Charter, various industry standards - they represent genuine effort to govern a technology that's moving faster than regulation can follow.
But there's a structural problem.
When an AI system makes a decision that harms someone - a credit application declined, a health screening missed, a benefit claim flagged incorrectly - who is personally accountable?
The data scientist who trained the model? The product manager who approved deployment? The executive who signed off on the use case? The governance board that reviewed the framework?
In practice, accountability gets distributed across so many roles and processes that it effectively disappears. Everyone is responsible, which means nobody is.
What the Matai System Gets Right
I'm not suggesting we transplant Samoan village governance into a technology company. That would be absurd, and it would misunderstand both systems.
But there are principles in the matai system that AI governance frameworks would benefit from studying.
Accountability is personal and named. A matai can't say "the system failed." They are the system. When a decision produces a bad outcome, everyone knows who was responsible. This doesn't mean the matai made the decision alone - the fono is collective. But the accountability is specific.
What if AI governance required a named individual, not a committee, to be accountable for each high-risk AI deployment? Not legally liable in a narrow sense, but publicly and personally responsible for outcomes?
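To make that idea concrete in tooling terms, here is a minimal sketch - hypothetical names and fields, not a real framework or anyone's actual process - of a deployment register that simply refuses to accept a high-risk system without a single named person attached:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    """A registered AI deployment and the one person answerable for it."""
    system: str
    risk: str               # "low" or "high"
    accountable_owner: str  # a named individual, never a team or committee

def register(registry: list, d: Deployment) -> None:
    """Refuse to register a high-risk deployment without a named person."""
    if d.risk == "high" and not d.accountable_owner.strip():
        raise ValueError(
            f"{d.system}: high-risk deployment needs a named accountable owner"
        )
    registry.append(d)

registry: list = []
register(registry, Deployment("credit-scoring-v2", "high", "A. Tuilagi"))
```

The point of the sketch isn't the code; it's that the constraint is structural. A committee name or an empty string doesn't pass - someone's own name has to go in the field.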
Authority comes with obligation. In the matai system, you can't have authority without obligation. The more authority you hold, the greater your obligation to your community. The title is a burden, not a reward.
In AI governance, authority over deployment decisions often sits with people whose primary obligation is to shareholders or delivery timelines. The obligation to the people affected by the system is secondary, abstract, mediated through policy documents.
Decisions are collective, but not anonymous. The fono deliberates. Matai argue, disagree, and negotiate. But when a decision is reached, it's reached by known people who will live with the consequences in their own community.
AI governance boards often deliberate behind closed doors. Their members may not be publicly known. They don't live in the communities affected by their decisions. The distance between decision-maker and decision-impact is enormous. This is the same empathy gap we see across enterprise AI.
When we talk about indigenous governance models for AI, we're not being romantic about tradition. We're pointing out that these systems solved problems - accountability, collective decision-making, long-term thinking - that Western governance frameworks are still struggling with. Te ao Māori and Pacific governance systems have centuries of iteration behind them. AI governance frameworks have about three years.
- Dr Tania Wolfgramm, Chief Research Officer
The Practice Gap
The biggest lesson from the matai system isn't about structure. It's about practice.
The matai system works because it's lived daily. A matai doesn't pull out a governance framework document when a decision needs to be made. The governance is embedded in relationships, expectations, and daily conduct. It's not a document. It's a way of operating.
Most AI governance exists as documents. Policies, frameworks, guidelines, checklists. They sit in SharePoint folders. They get reviewed annually. They influence behaviour only when someone remembers to check them, which is usually after something has already gone wrong.
The gap between governance-as-document and governance-as-practice is where most AI governance fails. And closing that gap requires something harder than writing better policies. It requires building cultures where accountability is personal, where authority carries obligation, and where the people making decisions are connected to the people affected by them.
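One small illustration of what governance-as-practice can look like in software - a hypothetical sketch, with invented names like `accountable` and `screen_claim`: the named person is attached to the decision function itself, so every decision the system emits carries their name, rather than the name living in a document nobody opens.

```python
from functools import wraps

def accountable(owner: str):
    """Attach a named accountable person to a decision function.

    Governance-as-practice sketch: the name travels with the code
    that runs, not with a policy document in a SharePoint folder.
    """
    if not owner.strip():
        raise ValueError("a decision function needs a named accountable owner")

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            # Every decision record names the person answerable for it.
            return {"decision": result, "accountable_owner": owner}
        return wrapper
    return decorator

@accountable("A. Tuilagi")
def screen_claim(amount: float) -> str:
    """Toy benefit-claim screen: flag unusually large claims for review."""
    return "flagged" if amount > 10_000 else "approved"
```

The design choice matters more than the mechanism: a function without an owner can't be deployed at all, which is closer to how a matai title works than an annually reviewed policy is.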
What This Means for AI Teams
I don't have a governance framework to offer. I have questions.
When your AI system makes a mistake, who do the affected people call? Not which department. Which person?
Does the person making deployment decisions understand the community the system serves? Have they met them? Do they know their names?
When your governance board approves a use case, do the board members carry personal responsibility for the outcome? Or does responsibility dissolve into the committee structure?
These aren't comfortable questions. The matai system isn't comfortable either. Carrying a title means carrying weight. It means being the person who gets woken up at 2am when there's a problem. It means your name is attached to outcomes, good and bad.
AI governance that works will need some version of that weight. Not the specific cultural form - that belongs to Samoa - but the principle underneath it: that governing powerful systems requires personal, named, felt accountability.
The machines are making bigger decisions every year. Someone needs to be the matai. If you're thinking about how to govern AI in your organisation, our approach starts with exactly these questions.
