AI Governance Beyond the Checklist

67% of leaders have approved AI deployments despite security concerns. Only 38% have comprehensive AI policies. Governance isn't a compliance exercise; it's a design problem.
7 April 2026 · 9 min read
Dr Tania Wolfgramm
Chief Research Officer
Most organisations are now deploying AI faster than they can govern it. The question is not whether your organisation is using AI. The question is whether your governance framework was designed for it, or whether it was borrowed from a compliance tradition that never anticipated autonomous decision-making systems.

What You Need to Know

  • Governance is a design discipline, not a compliance exercise. The organisations treating AI governance as a checklist are the ones most exposed to risk. Governance must be embedded in how AI systems are designed, deployed, and monitored.
  • The gap between AI deployment and AI governance is widening. Recent research shows 57% of leaders say AI is advancing faster than they can secure it, and only 38% have comprehensive AI policies in place.
  • New Zealand has no standalone AI legislation. Organisations here must self-govern within existing frameworks like the Privacy Act 2020 and public sector accountability principles. This is both a freedom and a responsibility.
  • Values-led governance outperforms rules-led governance. When governance is grounded in organisational values, tikanga, and purpose, it adapts to novel situations. Checklists cannot.
  • Three immediate actions can close the gap: appoint governance ownership, embed human-in-the-loop review for high-impact decisions, and establish audit trails before scaling further.
  • 67% of leaders have approved AI deployments despite security concerns
  • 38% of organisations have comprehensive AI policies in place
  • 57% say AI is advancing faster than they can secure it

Source: Trend Micro, Business Risk Survey, March 2026

The Governance Gap Is Widening

Research published by Trend Micro in March 2026 paints a sobering picture. Two-thirds of business leaders have felt pressured to approve AI deployments despite unresolved security concerns. Only 38% report having comprehensive AI policies. And 57% acknowledge that AI capabilities are advancing faster than their security frameworks can keep pace.
These are not fringe findings. They reflect a systemic pattern: the urgency to deploy AI is consistently outpacing the maturity of the governance structures intended to guide it. When 41% of leaders cite unclear regulation as a barrier to effective governance, the result is not caution. The result is ungoverned deployment.
This is the governance gap, and it is widening. Every quarter that passes without a coherent governance framework is a quarter of accumulated risk: data handling decisions made without policy, model outputs influencing customers without audit trails, and organisational exposure growing invisibly.

Why Checklists Fail

The instinct to respond to this gap with a checklist is understandable. Checklists are tangible, assignable, and completable. But AI governance is not a finite task. It is an ongoing design challenge.
A checklist asks: "Have we done this?" Governance asks: "Are we equipped to handle what we have not yet anticipated?" These are fundamentally different questions. AI systems are probabilistic, not deterministic. They evolve through retraining, fine-tuning, and shifting data distributions. A governance framework that was correct at deployment may be inadequate three months later.
New Zealand's regulatory approach makes this distinction especially important. Unlike the EU, which has introduced the AI Act with prescriptive risk categories and obligations, Aotearoa New Zealand has chosen what commentators describe as a "light-touch" approach. There is no standalone AI Act. Instead, the government relies on existing tech-neutral legislation, primarily the Privacy Act 2020, supplemented by the OECD AI Principles and the Public Service AI Framework.
This means organisations in Aotearoa cannot wait for regulation to tell them what good governance looks like. They must design it themselves. And that design work requires more than a spreadsheet of controls.

Governance as Architecture

If checklists are insufficient, what replaces them? The answer is governance as architecture: a values-led framework that is structural, adaptive, and embedded in how AI systems are built, not bolted on after deployment.
Three architectural principles matter most:
Human-in-the-loop as a design decision. For any AI system that influences decisions affecting people, whether customers, employees, or communities, human review should be designed into the workflow from the beginning. This is not about distrust of technology. It is about accountability. When an AI system recommends a claims decision or flags a patient for triage, a human must be positioned to review, override, and learn from that interaction. Designing this after deployment is expensive and fragile. Designing it from the outset is simply good engineering.
Audit trails as infrastructure. Every AI interaction that informs a decision should be logged with sufficient context to reconstruct the reasoning. This includes the input data, the model version, the confidence score, and the output. Audit trails are not bureaucracy. They are the foundation of explainability, and explainability is the foundation of trust.
Explainability as a cultural commitment. Genuine progress happens when cultural intelligence and technical intelligence work together. In te ao Māori, decisions that affect people carry obligations of transparency, of being able to explain the whakapapa of a decision, where it came from, what informed it, and who is accountable. This principle aligns naturally with responsible AI governance. Explainability is not a technical feature. It is a commitment to the people your systems serve.
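The audit-trail principle above can be made concrete with a minimal logging structure. This is an illustrative sketch, not a prescribed implementation: the `AuditRecord` fields and `log_interaction` helper are assumptions chosen to match the context the article names (input, model version, confidence score, output), written as append-only JSON lines so each decision can later be reconstructed.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One logged AI interaction: enough context to reconstruct the decision."""
    model_version: str
    input_summary: str   # or a hash / reference to the full input payload
    output: str
    confidence: float
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_interaction(record: AuditRecord, sink) -> None:
    """Append the record as one JSON line to an append-only audit log."""
    sink.write(json.dumps(asdict(record)) + "\n")
```

In practice the sink would be durable, tamper-evident storage rather than a local file, but the design point stands: the record is written at decision time, not reconstructed afterwards.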

The Aotearoa Context

New Zealand's approach to AI governance is deliberately proportionate. The government has signalled alignment with the OECD AI Principles and has published the Public Service AI Framework to guide government agencies, but it has not introduced AI-specific legislation.
The Privacy Act 2020 remains the primary statutory instrument relevant to AI, particularly where personal information is processed. The Act's information privacy principles, covering collection, use, disclosure, and storage, apply regardless of whether a human or an algorithm is making the decision. This is important: the law does not distinguish between human and automated processing. If your AI system processes personal information, the Privacy Act applies in full.
For organisations operating across borders, the regulatory landscape is more complex. The EU AI Act introduces obligations that may apply to New Zealand companies serving European customers. Simpson Grierson has described New Zealand's position as "walking a tightrope" between innovation-friendly settings and the need for public trust. Bell Gully's analysis characterises the approach as "light touch regulation" that places the burden of responsible deployment squarely on organisations themselves.
This means governance frameworks in Aotearoa must be self-sustaining. They cannot rely on regulatory prescription. They must be grounded in organisational values, informed by international standards, and rigorous enough to withstand scrutiny when things go wrong, because eventually, they will.

What to Do

Three concrete steps that any organisation can take now, regardless of size or AI maturity:
1. Appoint a governance owner, not a committee. Governance by committee diffuses accountability. Name one person, ideally at executive level, who owns the AI governance framework. This person is responsible for policy, risk oversight, and escalation. They do not need to be technical. They need to be authoritative and accountable.
2. Embed human review for high-impact decisions. Identify every AI system that influences decisions affecting people, whether staff, customers, or communities, and ensure a human review step is built into the workflow. Document the criteria for when human override is required. This is your most immediate risk reduction measure.
3. Establish audit trails before you scale. If you cannot explain how your AI system reached a particular output today, scaling that system will scale your risk proportionally. Implement logging of inputs, model versions, confidence scores, and outputs for every AI system currently in production. This is the foundation for everything else: compliance, trust, continuous improvement, and the ability to respond when something goes wrong.
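Step 2's documented override criteria can be sketched as a simple routing rule. The threshold, field names, and `requires_human_review` function below are illustrative assumptions: the point is that the criteria for escalation to a human are explicit and testable, not left to ad-hoc judgment.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    decision: str         # e.g. "approve_claim"
    confidence: float     # model confidence in [0, 1]
    affects_person: bool  # does this affect a customer, employee, or community member?

def requires_human_review(
    rec: AIRecommendation, confidence_floor: float = 0.95
) -> bool:
    """Route to a human reviewer if the decision affects a person,
    or if the model's confidence falls below the documented floor."""
    return rec.affects_person or rec.confidence < confidence_floor
```

A gate like this sits in the workflow between the model's output and any action taken on it, so the human-in-the-loop step is enforced by the system rather than by convention.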
Governance is not a barrier to AI adoption. It is the architecture that makes AI adoption sustainable. The organisations that understand this distinction are the ones building AI capabilities that will endure.