
Values-Led AI: What It Actually Means

Values-led AI is not a marketing claim. It is a design methodology grounded in the Pou Marama framework. Here is what that looks like in practice.
15 March 2025·7 min read
Dr Tania Wolfgramm
Chief Research Officer
Every AI company claims to be "responsible" or "ethical." Most of them mean they have a principles document somewhere that nobody reads. Values-led AI is different. It is a design methodology that embeds cultural, ethical, and relational values into the architecture of AI systems from the start, not as an afterthought.

The Gap Between Principles and Practice

The enterprise AI industry has no shortage of principles. Fairness. Transparency. Accountability. Beneficence. Every major technology company has published its AI principles. Most frameworks read well. Few translate into engineering decisions.
The gap is not cynicism. Most organisations genuinely want to build AI responsibly. The gap is methodology. Knowing that AI should be fair does not tell an engineering team how to make it fair. Knowing that AI should be transparent does not tell a product team what transparency looks like in a claims processing interface.
Principles without methodology produce two outcomes: either the principles are ignored because nobody knows how to implement them, or they are implemented inconsistently because each team interprets them differently.
78% of organisations with published AI ethics principles report difficulty translating them into engineering practice. (Source: Stanford HAI, AI Index Report 2025)

What Values-Led Means

Values-led AI starts from a different premise. Instead of asking "what principles should govern this AI system?" it asks "whose values does this AI system serve, and how do we make those values visible in the design?"
This distinction matters because values are relational, not universal. The values that matter to an iwi managing taonga are different from the values that matter to a bank managing mortgage applications. Both are legitimate. Both require different design responses.
The Pou Marama framework, which I have been developing through my doctoral research and our work at RIVER, provides a structured approach to this:
Identify the value holders. Who is affected by this AI system? Not just the users, but the communities, cultures, and relationships that the system touches. This is stakeholder analysis, but deeper. It includes cultural obligations, not just business requirements.
Surface the values. What do the value holders care about? For Māori communities, this might include whakapapa (genealogy and relationships), kaitiakitanga (guardianship), and mana (authority and prestige). For a professional services firm, it might include professional judgement, client confidentiality, and duty of care. The values are different. The methodology for surfacing them is the same.
Embed the values in design decisions. This is the critical step. Each value translates into specific design constraints. Kaitiakitanga translates into data governance requirements: who can access the data, how long it is retained, who has authority over its use. Professional judgement translates into interface design: the AI assists but does not replace the professional's decision.
Make the values visible. Users and stakeholders should be able to see how values are reflected in the system's behaviour. This is transparency, but specific. Not "we use AI responsibly" but "this system does not make decisions about your claim; it provides your assessor with structured information to support their decision."
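The four steps above can be sketched as a data structure. This is an illustrative example only, not part of the Pou Marama framework itself: the class names, value holders, and constraints below are hypothetical stand-ins showing how a surfaced value might be recorded alongside the design constraints it translates into.

```python
from dataclasses import dataclass, field

@dataclass
class Value:
    """One surfaced value and the design constraints it translates into."""
    name: str
    held_by: str                  # step 1: the value holder affected by the system
    constraints: list[str] = field(default_factory=list)  # step 3: design constraints

# Hypothetical worked example drawing on the values named in the text.
values = [
    Value(
        name="kaitiakitanga (guardianship)",
        held_by="iwi data stewards",
        constraints=[
            "data access limited to authorised roles",
            "retention period set and reviewed by the community, not the vendor",
        ],
    ),
    Value(
        name="professional judgement",
        held_by="claims assessors",
        constraints=[
            "AI output is advisory; the assessor records the final decision",
            "interface surfaces supporting evidence, never a bare verdict",
        ],
    ),
]

def design_spec(values: list[Value]) -> dict[str, list[str]]:
    """Step 4 in miniature: make each value's design response visible as a spec."""
    return {v.name: v.constraints for v in values}

spec = design_spec(values)
```

The point of the sketch is that each value becomes a named, inspectable artefact in the design record, rather than a line in a principles document.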

Why This Is Not Just Ethics

Ethics frameworks tend to be defensive: how do we avoid causing harm? Values-led design is generative: how do we create systems that actively serve the values of the people they affect?
The difference shows up in practice. An ethics review might flag that a claims triage system could exhibit bias against certain demographic groups and recommend bias testing. Values-led design asks a different question: what does fair claims handling mean to the communities this insurer serves, and how does the AI system support that meaning of fairness?
The ethics review produces a compliance checklist. The values-led approach produces a design specification. Both are useful. But only one changes how the system actually works.

In Practice

Values-led AI design adds two to three weeks to the discovery phase of an AI project. That investment pays back in three ways:
Stakeholder trust. When communities see their values reflected in the system's design, not just its documentation, trust builds faster. For organisations working with Māori communities, iwi, or Pacific communities, this is the difference between adoption and rejection.
Regulatory alignment. New Zealand's regulatory direction, including the emerging AI governance frameworks and the Privacy Act's information privacy principles, increasingly expects organisations to demonstrate that AI systems serve the interests of affected communities. Values-led design produces the evidence that regulators want to see.
Better products. AI systems designed around clear values make better decisions, not because the models are smarter, but because the design constraints are sharper. A system designed with "preserve professional judgement" as a core value produces a fundamentally different (and more useful) interface than one designed with "maximise automation."

The Work Ahead

Values-led AI is early in its development. The Pou Marama framework has been tested across a small number of engagements, and the results are promising but not yet at scale. The methodology needs refinement, particularly in how it handles conflicting values between different stakeholder groups.
But the direction is clear. As AI systems become more embedded in decisions that affect people's lives, livelihoods, and communities, the demand for values-led design will grow. The organisations that develop this capability now will have a structural advantage when the market catches up.

Values-led AI is not about being nice. It is about being precise. Precise about whose values the system serves, precise about how those values translate into design decisions, and precise about how users can see those values in action. That precision is what separates a marketing claim from a design methodology.