
The Pou Marama Framework

Evaluating AI through values, not just metrics. The Pou Marama framework for values-led AI governance, grounded in doctoral research and enterprise practice.
5 June 2025 · 8 min read
Dr Tania Wolfgramm
Chief Research Officer
For three years, I have been developing a framework that asks a question most AI governance models do not: whose values does this system serve? The Pou Marama framework is not a compliance checklist. It is a methodology for evaluating AI systems through the values of the people they affect, grounded in my doctoral research and tested through enterprise practice.

Why Another Framework

The world does not lack AI governance frameworks. The EU AI Act, the NIST AI Risk Management Framework, ISO 42001, and dozens of industry-specific guidelines all provide structure for governing AI responsibly. They are useful. They are also incomplete.
Most governance frameworks evaluate AI systems against universal principles: fairness, transparency, accountability, safety. These principles are necessary. They are not sufficient. They tell you what to measure without telling you whose perspective to measure from.
Fairness according to whom? Transparency for whose benefit? Accountability to which community? These are not abstract questions. They are the questions that determine whether an AI system serves or harms the people it touches.
The Pou Marama framework addresses this gap by starting from values, not principles. Not universal values imposed from outside, but the specific values of the communities, cultures, and relationships that an AI system affects.

The Framework

Pou Marama translates loosely as "pillars of light" or "guiding beacons." The framework uses four pou (pillars) to evaluate AI systems:

Pou Tikanga: The Values Pillar

What values does this AI system encode, and whose values are they?
Every AI system makes value-laden decisions, even when it appears purely technical. A claims triage system that prioritises speed encodes a value (efficiency). A knowledge retrieval system that surfaces certain documents over others encodes a value (relevance, as defined by whoever designed the ranking).
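To make that concrete, here is a deliberately minimal sketch of a triage scorer. Every field name and weight is hypothetical; the point is that the value judgements live in the weights, not in any line marked "ethics".

```python
# A minimal sketch: a "purely technical" scoring function whose weights
# are value judgements. All field names and weights are hypothetical.

def triage_score(claim: dict) -> float:
    """Score a claim for processing priority.

    Weighting handling time most heavily encodes efficiency as the
    system's dominant value. A community that prioritises care for
    vulnerable claimants might invert these weights: same code shape,
    different values.
    """
    return (
        0.6 * (1.0 / claim["estimated_handling_hours"])  # efficiency
        + 0.3 * claim["fraud_risk"]                      # institutional protection
        + 0.1 * claim["claimant_hardship"]               # care for the claimant
    )

print(triage_score({
    "estimated_handling_hours": 2.0,
    "fraud_risk": 0.1,
    "claimant_hardship": 0.8,
}))  # -> 0.41
```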
The Tikanga pillar makes these embedded values explicit. It asks:
  • Whose values are reflected in the system's design?
  • Whose values are missing?
  • Where do the encoded values conflict with the values of affected communities?
  • How are value conflicts resolved, and who has authority in that resolution?
This is not a theoretical exercise. It produces specific design requirements. If kaitiakitanga (guardianship) is a core value for an affected community, the system must include data governance mechanisms that reflect guardianship principles: community oversight, return of insights to the community, restrictions on data use beyond the agreed purpose.
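As a sketch of what those design requirements can look like when written down, here is one way a kaitiakitanga requirement might be recorded. The schema and field names are my illustration, not part of the framework itself.

```python
# A sketch of how one community value translates into concrete design
# requirements. The schema, mechanisms, and field names are illustrative
# assumptions, not a canonical encoding of kaitiakitanga.
from dataclasses import dataclass

@dataclass
class ValueRequirement:
    value: str             # the community value the requirement serves
    holder: str            # whose value it is
    mechanisms: list[str]  # design mechanisms the engineering team must implement
    verification: str      # how compliance is checked in review

kaitiakitanga = ValueRequirement(
    value="kaitiakitanga (guardianship)",
    holder="affected community",
    mechanisms=[
        "community oversight with access to usage logs",
        "regular return of derived insights to the community",
        "data use restricted to the agreed purpose; new uses need fresh consent",
    ],
    verification="sign-off by the community oversight group at each release",
)
```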

Pou Mātauranga: The Knowledge Pillar

What knowledge does this AI system use, and whose knowledge is it?
AI systems are built on data, and data is a form of knowledge. The Mātauranga pillar examines the knowledge foundations of an AI system:
  • Whose knowledge is represented in the training data?
  • Whose knowledge is absent?
  • Does the system treat all knowledge traditions with equal respect, or does it privilege certain traditions over others?
  • How is indigenous knowledge protected from extraction without consent?
For systems that process or generate content related to Māori knowledge (te reo, tikanga, whakapapa, local history), this pillar requires explicit engagement with the knowledge holders. Not consultation. Engagement. The difference is authority: consultation asks for input; engagement shares decision-making.
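One concrete mechanism this pillar can produce is consent-gated data selection. The sketch below is an assumption about how that might be implemented, not a prescribed design: records enter a training set only when their provenance carries explicit consent for that purpose.

```python
# A sketch of consent-gated data selection. The record schema and purpose
# strings are assumptions for illustration, not a standard.
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source_community: str
    consented_purposes: frozenset[str]

def select_for_training(records: list[Record], purpose: str) -> list[Record]:
    """Exclude any record whose holders have not consented to this purpose.

    Absence of consent metadata means exclusion, not inclusion: the burden
    is on the system builder to demonstrate consent, not on knowledge
    holders to opt out.
    """
    return [r for r in records if purpose in r.consented_purposes]

corpus = [
    Record("general product FAQ", "n/a", frozenset({"training", "retrieval"})),
    Record("local history narrative", "hypothetical hapū", frozenset({"retrieval"})),
]
print(len(select_for_training(corpus, "training")))  # -> 1: the history record is excluded
```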

Pou Whanaungatanga: The Relationships Pillar

What relationships does this AI system affect, and how?
AI systems are not isolated technical artefacts. They sit within networks of relationships: between users and organisations, between organisations and communities, between communities and their data. The Whanaungatanga pillar maps these relationships and evaluates how the AI system affects them.
This pillar often surfaces impacts that purely technical evaluations miss. A customer service AI that handles complaints efficiently (high marks on the technical evaluation) but eliminates the human relationship between the organisation and its customers (low marks on the relationships evaluation) is a system that optimises a metric while degrading a value.
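A minimal sketch of the dual evaluation this implies, with invented metric names and numbers: both scores are reported side by side rather than collapsed into one, so a relational failure cannot hide behind a technical success.

```python
# A sketch of side-by-side technical and relational evaluation.
# Metric names and values are invented for illustration.

def evaluate(system_metrics: dict) -> dict:
    return {
        "technical": system_metrics["resolution_rate"],          # e.g. complaints resolved
        "relational": system_metrics["human_contact_retained"],  # e.g. cases where a human relationship is preserved
    }

print(evaluate({"resolution_rate": 0.95, "human_contact_retained": 0.20}))
# {'technical': 0.95, 'relational': 0.2} -- high marks technically, failing relationally
```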

Pou Mana: The Authority Pillar

Who has authority over this AI system, and is that authority appropriate?
Authority in AI governance is usually framed as oversight: who reviews the system, who approves changes, who handles incidents. The Mana pillar goes deeper. It asks whether the right people and communities have meaningful authority over the system's behaviour and impact.
For AI systems that affect Māori communities, the Mana pillar evaluates whether those communities have genuine authority over how their data is used, how their knowledge is represented, and how the system's impacts on their communities are assessed and addressed. Not advisory authority. Decision-making authority.
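The difference between advisory and decision-making authority can be expressed as a deployment gate. This sketch assumes a hypothetical approval workflow; the specific authority names are illustrative.

```python
# A sketch of decision-making (not advisory) authority: a change cannot
# deploy without approval from every designated authority, including
# affected-community representatives. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Approval:
    authority: str
    approved: bool

def may_deploy(approvals: list[Approval], required: set[str]) -> bool:
    """Deploy only if every required authority has approved.

    Community representatives hold the same veto as internal reviewers;
    their absence from the approvals list blocks deployment rather than
    being treated as silent consent.
    """
    granted = {a.authority for a in approvals if a.approved}
    return required <= granted

required = {"engineering review", "data governance", "community representatives"}
approvals = [
    Approval("engineering review", True),
    Approval("data governance", True),
    # no community approval recorded
]
print(may_deploy(approvals, required))  # False -- the change is blocked
```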

In Practice

The Pou Marama framework adds a structured evaluation phase to AI project delivery. In our work at RIVER, this typically takes 1-2 weeks during the discovery phase and produces four artefacts (a sketch of how they might be recorded follows this list):
  • A values map that documents whose values the system affects and how. This becomes a design input, not a compliance document.
  • Design constraints derived from the values map: specific, actionable requirements that the engineering team implements alongside functional requirements.
  • Governance mechanisms that ensure the values remain embedded as the system evolves. Not a one-time review, but ongoing participation by value holders in the system's governance.
  • Evaluation criteria that supplement standard performance metrics with values-based measures. The system is evaluated not just on accuracy and speed, but on whether it serves the values it was designed to uphold.
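Here is one way a values map entry might be recorded so it functions as a design input rather than a compliance document. The schema is entirely my illustration; the framework does not prescribe one.

```python
# A sketch of a values map entry as a design input. Every field name is
# an assumption about how such an artefact might be structured.
from dataclasses import dataclass

@dataclass
class ValuesMapEntry:
    value: str                      # the value at stake
    holders: list[str]              # whose value it is
    affected_by: str                # which system behaviour touches it
    design_constraints: list[str]   # requirements handed to engineering
    evaluation_criteria: list[str]  # values-based measures, alongside accuracy/speed
    governance: str                 # ongoing mechanism keeping the value embedded

entry = ValuesMapEntry(
    value="whanaungatanga (relationships)",
    holders=["customers", "frontline staff"],
    affected_by="automated complaint handling",
    design_constraints=["human escalation path preserved for all complaint types"],
    evaluation_criteria=["share of complaints with a retained human relationship"],
    governance="quarterly review with staff and customer representatives",
)
```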
The four evaluation pillars of the Pou Marama framework: Tikanga (values), Mātauranga (knowledge), Whanaungatanga (relationships), and Mana (authority).

What This Changes

The Pou Marama framework changes three things about how AI projects are delivered:
Scope. The evaluation includes impacts and relationships that purely technical governance models miss. This sometimes reveals that an AI system should not be built at all, or should be built differently than originally planned. That is a feature, not a limitation.
Participation. The people affected by the AI system participate in its governance, not as consultants but as decision-makers. This requires more time upfront and produces more trust in the outcome.
Accountability. The values are documented, the design constraints are explicit, and the evaluation criteria are measurable. When someone asks "does this system serve the values of the people it affects?" there is a concrete, evidence-based answer.

The Research Foundation

This framework is grounded in my doctoral research on indigenous knowledge governance and AI ethics, conducted over the past three years. It draws on tikanga Māori (Māori customary practice), kaupapa Māori research methodology (research that centres Māori worldviews and priorities), and practical experience applying these principles in enterprise AI delivery.
The academic work is ongoing. The practical application is here now. The two inform each other: enterprise practice surfaces questions that academic inquiry addresses, and academic inquiry provides rigour that enterprise practice demands.

AI governance needs more than principles. It needs methodology. The Pou Marama framework provides that methodology by starting where governance should always start: with the people the system affects and the values they hold. That is not an alternative to technical governance. It is the foundation on which technical governance becomes meaningful.