I've sat in village fono across Samoa and watched decisions being made in ways that most technology governance boards would struggle to understand. Everyone affected speaks. The matai listen. Disagreement is expected. Resolution takes as long as it takes. There's no agenda item limit, no three-minute speaking cap. The process is slow by corporate standards - and it produces better outcomes than most AI ethics committees I've seen.
What You Need to Know
- Pacific governance systems like the fono (village council) are built around collective decision-making, ongoing consent, and accountability to future generations - principles that most AI governance frameworks claim but don't practise
- Consent in Pacific communities is a relationship, not a transaction. It's maintained through ongoing dialogue, not one-time checkboxes
- Data in Pacific epistemology is collective taonga (treasure), not an individual asset to be extracted and monetised
- AI design patterns should draw from these governance models, not just reference them in ethics statements
The Fono and the Board
In a Samoan fono, a decision about village resources - water access, land use, fishing rights - involves the people who will live with the consequences. The matai (chiefs) hold authority, but that authority comes with tautua, an obligation of service. You don't lead without serving first. And the decision isn't final until the community has had its say.
Now think about how most AI governance works. A board of executives, technical leaders, and perhaps a legal representative makes decisions about systems that will affect thousands or millions of people. The communities most impacted - those whose data trains the models, whose jobs change, whose services are mediated by algorithms - are rarely in the room.
The fono model isn't perfect. No governance model is. But it starts from a fundamentally different assumption: that the people affected by a decision have a right to participate in making it.
This isn't a philosophical nicety. It's a design principle that produces different outcomes.
Consent as Ongoing Relationship
The way most technology systems handle consent is transactional. You click a box. You accept terms. The organisation now has permission to use your data in ways described across forty pages of legal language that neither party seriously expects anyone to read.
In Pacific communities, consent doesn't work that way. When a family shares health information with a community health worker, that sharing happens within a relationship. The worker knows the family. The family trusts the worker. And if that trust is broken - if the information is used in ways the family didn't expect - the relationship is damaged. The consent, in effect, is withdrawn.
This relational model of consent maps onto AI ethics in ways that the checkbox model simply doesn't. When an AI system uses someone's data to train a model, that use should be governed by an ongoing relationship between the data provider and the data user. Not a one-time transaction that the provider has already forgotten.
What would this look like in practice? Regular communication about how data is being used. Meaningful mechanisms for people to understand what their data contributes to. Real options to withdraw, not buried settings three menus deep. It would be slower and more expensive than the current model. It would also be more honest.
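To make the contrast concrete, here's a minimal sketch of what relational consent might look like as a data structure. Everything here is illustrative - ConsentRelationship, the state names, and the method signatures are my own invention, not a standard API - but it shows the shape: consent carries a dialogue log, an agreed set of purposes, and withdrawal as a first-class operation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class ConsentState(Enum):
    ACTIVE = "active"        # relationship in good standing
    PAUSED = "paused"        # community has asked for dialogue before further use
    WITHDRAWN = "withdrawn"  # consent revoked; data use must stop


@dataclass
class ConsentRelationship:
    """Consent modelled as an ongoing relationship, not a one-time grant."""
    data_provider: str                 # e.g. a family or community
    data_user: str                     # e.g. the organisation training a model
    agreed_purposes: set[str]          # uses the provider actually agreed to
    state: ConsentState = ConsentState.ACTIVE
    dialogue_log: list[tuple[datetime, str]] = field(default_factory=list)

    def record_dialogue(self, note: str) -> None:
        """Every use report or community query becomes part of the record."""
        self.dialogue_log.append((datetime.now(), note))

    def may_use_for(self, purpose: str) -> bool:
        """Use is permitted only while the relationship is in good standing
        and the purpose is one the provider actually agreed to."""
        return self.state is ConsentState.ACTIVE and purpose in self.agreed_purposes

    def withdraw(self, reason: str) -> None:
        """Withdrawal is a first-class operation, not a setting three menus deep."""
        self.state = ConsentState.WITHDRAWN
        self.record_dialogue(f"Consent withdrawn: {reason}")
```

The point isn't the code. It's that may_use_for asks about the current state of a relationship, not whether a box was once ticked.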
"Indigenous governance systems have always understood that decisions carry obligations across time. When we make a decision in a Māori or Pacific context, we're accountable to our tūpuna and to generations not yet born. AI governance that only considers current stakeholders is, by indigenous standards, incomplete."
Dr Tania Wolfgramm, Chief Research Officer
Data as Collective Taonga
Western data frameworks treat data as an individual asset. You generate it, you own it (in theory), you consent to its use. The entire structure of privacy law is built around the individual.
But in Pacific communities, much of the most important data is collective. Health patterns across a family. Environmental knowledge held by a village. Cultural practices passed through generations. This data belongs to the collective, and decisions about its use are collective decisions.
The concept of taonga - treasure, something precious held in trust - captures this well. Data about a community is taonga. It has value. It carries responsibility. And it can't be separated from the people it describes without losing something essential.
When an AI system trains on health data from Pacific communities, it's not just processing individual records. It's drawing on collective knowledge, collective experience, collective vulnerability. The governance of that data needs to reflect its collective nature.
This has practical implications for AI development. Community-level consent mechanisms. Governance structures that include community representatives with real authority, not advisory roles. Benefit-sharing arrangements that return value to the communities whose data created that value. Data sovereignty isn't abstract policy here; it's the baseline.
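Here's a sketch of what community-level consent could look like in code, under the same caveat as before: CollectiveDataset and its threshold are hypothetical illustrations, not an existing framework. The key move is that approval authority sits with the community's own representatives, and the default is consensus rather than a bare majority.

```python
from dataclasses import dataclass, field


@dataclass
class CollectiveDataset:
    """Data held as taonga: decisions about its use are collective decisions."""
    community: str
    representatives: set[str]        # members holding real decision authority
    approval_threshold: float = 1.0  # consensus by default, not a bare majority
    decisions: list[tuple[str, bool]] = field(default_factory=list)

    def approve_use(self, purpose: str, approvals: set[str]) -> bool:
        """A proposed use proceeds only with enough agreement from the
        community's own representatives; no individual record-holder
        can consent on the collective's behalf."""
        valid = approvals & self.representatives
        approved = len(valid) >= self.approval_threshold * len(self.representatives)
        self.decisions.append((purpose, approved))  # every decision stays on record
        return approved


# Two of three representatives is not consensus, so this use is declined.
village_health = CollectiveDataset(
    community="example nu'u",
    representatives={"matai_a", "matai_b", "health_committee"},
)
village_health.approve_use("train_screening_model", {"matai_a", "matai_b"})  # False
```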
Accountability to Future Generations
Most AI governance frameworks are accountable to current stakeholders: shareholders, regulators, users. Pacific governance adds another dimension - accountability to those who come after.
In Samoan culture, the concept of tautua (service) extends across generations. A matai's decisions are judged not just by their immediate effects but by their long-term consequences for the 'āiga (extended family) and the nu'u (village). A good decision protects future options. A bad decision closes them off.
Applied to AI, this means asking questions that current governance frameworks often skip. How will this system affect communities in ten years? Twenty? Are we creating dependencies that future generations can't easily exit? Are we making decisions about data use that future community members would disagree with but can't reverse?
These aren't easy questions. They don't have clean answers. But asking them changes the design process. It slows things down in ways that matter.
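One way to build that slowness in deliberately is a deployment gate. A minimal sketch, with a hypothetical question list and function name, the wording drawn from the paragraph above rather than from any standard framework:

```python
# Illustrative questions a review gate might require documented answers to.
INTERGENERATIONAL_QUESTIONS = [
    "How will this system affect these communities in ten years? Twenty?",
    "Are we creating dependencies future generations can't easily exit?",
    "Are we making data-use decisions future community members couldn't reverse?",
]


def cleared_for_deployment(documented_answers: dict[str, str]) -> bool:
    """Block deployment until every question has a documented answer;
    the point is to slow the process down where it matters."""
    return all(documented_answers.get(q, "").strip()
               for q in INTERGENERATIONAL_QUESTIONS)
```

A gate like this doesn't answer the questions. It just refuses to let the process skip them.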
From Acknowledgement to Design
Here's what frustrates me about most AI ethics work. Pacific values, indigenous knowledge systems, and community governance models get mentioned in preambles, acknowledged in principles documents, and then quietly set aside when the actual system design begins. The engineering team builds what it was going to build anyway, with a nod to cultural sensitivity in the documentation.
The shift I'm arguing for is structural. Don't just acknowledge that Pacific governance models exist. Use them as design inputs.
Build consent mechanisms that are relational, not transactional. Design data governance that recognises collective ownership. Create accountability structures that consider intergenerational impact. Include affected communities in decision-making with real authority, not token representation.
This won't produce AI systems faster. It will produce AI systems that work better for the communities they're meant to serve. And in my experience, systems built with communities tend to be more resilient, more trusted, and more effective than systems built for them.
The fono has been producing governance outcomes for centuries. The AI ethics board has been around for about five years. Maybe it's time to learn from the longer track record.
If you're exploring how to embed these principles into your own AI work, we'd welcome that conversation.
