New Zealand doesn't have AI-specific legislation yet. That's not a reason to delay governance. It's a reason to get ahead of it. The organisations building governance frameworks now will be ready when regulation arrives. The ones waiting will be scrambling.
The Regulatory Landscape (October 2024)
New Zealand's approach to AI governance is evolving across several parallel tracks:
Privacy Act 2020 already applies to AI systems that process personal information. The Office of the Privacy Commissioner has been increasingly active in guidance around automated decision-making, particularly for government agencies.
The Algorithm Charter (signed by government agencies) commits signatories to transparency about algorithmic decision-making. While voluntary, it signals the direction of travel.
Australia's AI Ethics Framework and the EU AI Act provide models that NZ regulators are actively studying. The Trans-Tasman alignment means Australian developments directly influence NZ policy.
34% of NZ enterprises have a formal AI governance framework (Source: NZTech, AI Readiness Report 2024)
That means roughly two-thirds of New Zealand enterprises using AI have no formal governance around it. This is a risk: not just regulatory, but reputational and operational.
What Boards Need to Understand
AI Governance Is Not IT Governance
The instinct is to slot AI governance into existing IT governance frameworks. This doesn't work. AI introduces risks that IT governance wasn't designed for:
- Output unpredictability: Traditional software does what it's programmed to do. AI systems can produce unexpected outputs, including biased or incorrect ones.
- Data dependency: AI systems are only as good as their training data. Biased data produces biased decisions, even when the algorithm is technically correct.
- Explainability gaps: Many AI systems can't explain why they made a specific decision. This is a problem for regulated industries and for public trust.
AI governance isn't a compliance checkbox. Boards that delegate this entirely to IT are abdicating a strategic responsibility.
Dr Tania Wolfgramm, Chief Research Officer
The Three Pillars of AI Governance
Based on our work with NZ enterprises and the emerging international frameworks, AI governance rests on three pillars:
1. Accountability. Clear ownership of AI decisions. Someone is responsible for what the AI does, how it's monitored, and what happens when it's wrong.
2. Transparency. The organisation can explain what AI systems are in use, what decisions they influence, and how those decisions are made. To staff, to customers, and to regulators.
3. Fairness. AI systems are regularly tested for bias, and there are processes to correct bias when found. This is particularly important in hiring, lending, insurance, and public services.
The Board's Role
The board doesn't need to understand transformer architectures or fine-tuning parameters. The board needs to:
- Know which AI systems are in use across the organisation and what decisions they influence
- Understand the risk profile. What happens if an AI system makes a wrong or biased decision?
- Ensure accountability. Who is responsible for AI system performance and outcomes?
- Set ethical boundaries. What decisions should AI never make autonomously?
- Monitor regulatory development. What's coming, and are we ready?
$7.4M average cost of an AI ethics incident to enterprise reputation (Source: MIT Sloan Management Review, 2024)
A Practical Governance Framework
Tier 1: AI Register
Start with visibility. Every AI system in the organisation goes on a register: enterprise platforms, departmental tools, even the ChatGPT subscriptions someone's expensing.
For each system, document:
- What it does and what decisions it influences
- What data it uses (including personal information)
- Who owns it and who monitors it
- Risk classification (low / medium / high / critical)
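The register can live in a spreadsheet, but even a lightweight structured format keeps entries consistent. A minimal sketch in Python follows; the field names and `RiskLevel` values are illustrative, not a standard:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class AIRegisterEntry:
    """One AI system on the organisation's register."""
    name: str
    purpose: str                     # what it does
    decisions_influenced: list[str]  # decisions it feeds into
    data_used: list[str]             # data sources, incl. personal information
    uses_personal_info: bool
    owner: str                       # accountable person
    monitor: str                     # who watches it in operation
    risk: RiskLevel

# Hypothetical example entry: a departmental CV-screening tool
entry = AIRegisterEntry(
    name="CV screening assistant",
    purpose="Ranks job applications for shortlisting",
    decisions_influenced=["hiring shortlist"],
    data_used=["CVs", "application forms"],
    uses_personal_info=True,
    owner="Head of People",
    monitor="HR systems team",
    risk=RiskLevel.HIGH,
)
```

The point isn't the tooling; it's that every entry answers the same four questions, so gaps are visible at a glance.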
Start Here
Most NZ enterprises we work with are surprised by what they find when they build their AI register. Teams are using AI tools the organisation doesn't know about. That shadow AI is your most immediate governance risk.
Tier 2: Risk Framework
Classify AI systems by the decisions they influence:
| Risk level | Decision type | Examples | Governance requirement |
|---|---|---|---|
| Low | Informational | Content summarisation, search, document drafting | Register + basic monitoring |
| Medium | Advisory | Customer recommendations, risk scoring, resource allocation | Register + regular review + bias testing |
| High | Decisional | Hiring screening, credit decisions, clinical triage | Register + human oversight + explainability + audit trail |
| Critical | Autonomous | Safety systems, automated trading, infrastructure control | Full governance + board reporting + external audit |
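The table above can be encoded so each registered system carries its minimum controls automatically. A sketch, with control names taken from the table and the `RiskLevel` tiers as an illustrative enum:

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

# Minimum governance controls per tier, mirroring the table above
GOVERNANCE_REQUIREMENTS = {
    RiskLevel.LOW:      ["register", "basic monitoring"],
    RiskLevel.MEDIUM:   ["register", "regular review", "bias testing"],
    RiskLevel.HIGH:     ["register", "human oversight", "explainability",
                         "audit trail"],
    RiskLevel.CRITICAL: ["register", "full governance", "board reporting",
                         "external audit"],
}

def required_controls(risk: RiskLevel) -> list[str]:
    """Return the minimum controls for a system at the given risk tier."""
    return GOVERNANCE_REQUIREMENTS[risk]
```

For example, `required_controls(RiskLevel.HIGH)` returns the register, human oversight, explainability, and audit-trail controls, which a deployment checklist can then enforce.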
Tier 3: Operating Procedures
For each risk level, define:
- Pre-deployment: Testing, validation, and approval requirements
- In-operation: Monitoring, performance metrics, and drift detection
- Incident response: What happens when something goes wrong
- Review cycle: How often the system is reassessed
Tier 4: Reporting
Regular reporting to the board on:
- AI systems in use and their risk classifications
- Any incidents or near-misses
- Regulatory developments and readiness
- Performance metrics and bias testing results
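The first of those reporting items can be generated straight from the register. A sketch that counts systems per risk tier; it assumes register entries are dicts with a `"risk"` key, which is purely illustrative:

```python
from collections import Counter

def board_summary(register: list[dict]) -> dict:
    """Summarise the AI register for board reporting: systems per risk tier."""
    counts = Counter(entry["risk"] for entry in register)
    return {tier: counts.get(tier, 0)
            for tier in ("low", "medium", "high", "critical")}

# Hypothetical three-system register
register = [
    {"name": "Content summariser", "risk": "low"},
    {"name": "Customer recommender", "risk": "medium"},
    {"name": "CV screening assistant", "risk": "high"},
]
summary = board_summary(register)
# summary: {"low": 1, "medium": 1, "high": 1, "critical": 0}
```

Reporting all four tiers, including empty ones, keeps the board pack consistent quarter to quarter.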
NZ-Specific Considerations
Te Tiriti and AI
For government agencies and organisations working with Māori data, AI governance must consider Te Tiriti o Waitangi obligations. This includes:
- Māori data sovereignty principles (as outlined by Te Mana Raraunga)
- Ensuring AI systems don't perpetuate existing inequities
- Engaging with Māori stakeholders in AI system design for services that affect Māori communities
Privacy Act 2020
Key implications for AI:
- Information Privacy Principle 1: Only collect personal information for a lawful purpose
- Information Privacy Principle 6: Individuals have the right to access information held about them, including AI-generated assessments
- Explainability: decisions that significantly affect an individual should be explainable. The Act has no GDPR-style automated decision-making provision (the GDPR's Article 22), but the Office of the Privacy Commissioner's guidance points in this direction.
Small Market, High Visibility
New Zealand's small market means AI incidents are highly visible. A biased hiring algorithm in Auckland makes national news. A flawed risk model at a government agency triggers parliamentary questions. The reputational cost of getting AI wrong in NZ is disproportionately high relative to market size.
In a market this small, your AI governance framework isn't just risk management; it's a competitive differentiator. Enterprises that can demonstrate responsible AI use win contracts that their ungoverned competitors can't.
Isaac Rolfe, Managing Director
Getting Started
If your organisation has no AI governance framework today, start with three actions:
1. Build the register. Document every AI system in use. This typically takes 2-4 weeks and reveals surprising gaps.
2. Classify the risks. Apply the four-tier framework to each system. Focus attention on high and critical.
3. Assign ownership. Every AI system gets an owner. That owner is accountable for performance, monitoring, and governance compliance.
These three steps take 4-6 weeks and give the board a foundation to govern from. Everything else (policies, procedures, external audits) builds on this base.
- Does NZ have AI-specific legislation?
- Not as of October 2024. However, the Privacy Act 2020 already covers AI systems that process personal information, and the government's Algorithm Charter sets expectations for government agencies. AI-specific regulation is expected to follow the EU and Australian lead within 2-3 years.
- Who should own AI governance in an enterprise?
- AI governance should sit at the executive level, not buried in IT. Many organisations create a cross-functional AI governance committee with representatives from technology, legal, risk, and the business units using AI. A board subcommittee or risk committee should have oversight.
- How do I handle shadow AI (employees using AI tools the organisation doesn't know about)?
- Start with visibility, not prohibition. Build the register, communicate clear guidelines about what's acceptable, and provide approved alternatives for common use cases. Banning AI tools entirely just drives usage underground. Governance is about making the right path the easy path.

