New Zealand does not have AI-specific legislation. It will. The question is not whether regulation is coming but what form it will take, when it will arrive, and what enterprises should do now to prepare. Based on regulatory signals, international precedents, and our conversations with policymakers, here is our assessment.
The Current State
As of late 2025, New Zealand's approach to AI governance relies on existing legislation applied to AI contexts:
The Privacy Act 2020 governs how personal information is collected, used, and disclosed. It applies to AI systems that process personal data, but was not designed with AI in mind. Key gaps: the Act does not specifically address automated decision-making, algorithmic transparency, or the right to human review of AI decisions.
The Human Rights Act 1993 prohibits discrimination. AI systems that produce discriminatory outcomes may violate the Act, but enforcement in AI contexts is untested. No significant case law exists.
The Consumer Guarantees Act 1993 and Fair Trading Act 1986 apply to AI products sold to consumers, but do not address the specific risks of AI (hallucination, bias, opaque decision-making).
The Financial Markets (Conduct of Institutions) Amendment Act 2022 introduces fair conduct obligations for financial institutions, including obligations around the use of technology in customer-facing decisions. This is the closest thing NZ has to sector-specific AI regulation.
Government guidance in the form of the Algorithm Charter for Aotearoa New Zealand (2020) provides voluntary principles for government agencies using algorithms and AI. Adoption has been inconsistent.
Zero pieces of AI-specific legislation are currently enacted in New Zealand.
What Is Proposed
The NZ government has signalled movement on AI governance through several channels:
The Ministry of Business, Innovation and Employment (MBIE) has been developing an AI governance framework through consultation rounds in 2024 and 2025. The emerging direction favours a risk-based approach similar to the EU AI Act but adapted for NZ's scale and context.
The Privacy Commissioner has published guidance on AI and privacy, pushing the boundaries of the Privacy Act's applicability to AI. The Commissioner has advocated for transparency requirements and algorithmic impact assessments, though these do not yet have legislative backing.
The Chief Science Advisor has recommended a more proactive regulatory approach, citing the pace of AI adoption and the gaps in existing legislation.
What Is Likely
Based on the signals, here is what we believe is likely over the next 2-3 years:
Risk-Based Classification (High Probability)
New Zealand will likely adopt a risk-based framework that classifies AI systems by their potential for harm. High-risk applications (healthcare, criminal justice, financial services, employment) will face stricter requirements than low-risk applications (content generation, productivity tools).
This aligns with the EU AI Act approach and reflects the direction of MBIE's consultation. The NZ version will likely be less prescriptive than the EU model, reflecting NZ's preference for principles-based regulation and the practical constraint that a prescriptive regime requires regulatory capacity that NZ does not currently have.
Transparency Requirements (High Probability)
Requirements for organisations to disclose when AI is being used in decisions that affect individuals. Not technical transparency (explain the algorithm) but outcome transparency (tell people when an AI system influenced a decision about them and give them a path to review).
This is already the direction of the Privacy Commissioner's guidance and aligns with international norms. It is the lowest-friction regulatory intervention and the most likely to arrive first.
Algorithmic Impact Assessments (Medium Probability)
Requirements for organisations deploying high-risk AI to conduct and publish impact assessments. Similar to privacy impact assessments under the Privacy Act, but focused on AI-specific risks: bias, accuracy, fairness, and societal impact.
The Privacy Commissioner has advocated for this. MBIE's consultation has explored it. The question is whether it becomes mandatory or remains voluntary guidance.
Sector-Specific Requirements (Medium Probability)
Specific AI requirements for regulated sectors, building on existing regulatory frameworks. Financial services (through the conduct obligations), healthcare (through the Health Information Privacy Code), and government (through an updated Algorithm Charter) are the most likely sectors.
This approach lets NZ regulate AI without creating new regulatory bodies. Existing regulators (FMA, RBNZ, Privacy Commissioner, Health and Disability Commissioner) would add AI-specific requirements to their existing mandates.
Comprehensive AI Legislation (Low Probability in 2-3 Years)
A standalone AI Act equivalent to the EU AI Act is unlikely within the next 2-3 years. NZ's legislative capacity, the political cycle, and the preference for incremental regulation all argue against comprehensive legislation in the near term.
The more likely path is incremental: transparency requirements first, then impact assessments, then sector-specific rules, eventually consolidating into a coherent framework. This matches NZ's regulatory style.
What Enterprises Should Do Now
Regardless of the specific timing and form of regulation, the direction is clear. Enterprises can prepare now:
Build Governance Into AI Systems
Every AI system deployed in production should have:
- An audit trail of what the AI did and why
- Access controls aligned with data sensitivity
- Monitoring for bias, accuracy, and performance drift
- A clear accountability structure (who is responsible when the AI fails)
These are good engineering practices regardless of regulation. When regulation arrives, organisations with governance already in place will be compliant by design.
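The audit-trail requirement above can be sketched as a minimal, append-only decision record. This is an illustrative example, not a format prescribed by any NZ regulation or guidance; the field names (`model_id`, `human_reviewer`, and so on) are assumptions about what a regulator would plausibly want to see.

```python
import json
import uuid
from datetime import datetime, timezone

def record_ai_decision(model_id, inputs_summary, output, human_reviewer=None):
    """Build one audit record for an AI-influenced decision.

    Field names are illustrative, not drawn from any official template.
    Returns a JSON string suitable for an append-only log.
    """
    entry = {
        "decision_id": str(uuid.uuid4()),                      # unique reference for later review
        "timestamp": datetime.now(timezone.utc).isoformat(),   # when the decision was made
        "model_id": model_id,                                  # which system/version produced it
        "inputs_summary": inputs_summary,                      # what the model saw (redacted as needed)
        "output": output,                                      # what the model produced
        "human_reviewer": human_reviewer,                      # who is accountable for the outcome
    }
    return json.dumps(entry)

# Example: log a hypothetical loan-triage decision reviewed by a named officer
record = record_ai_decision("triage-model-v2", {"applicant_band": "B"},
                            "refer_to_manual_review", human_reviewer="j.smith")
```

Even a record this simple answers the questions a regulator (or an internal review) will ask first: what did the AI do, when, and who was responsible.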
Conduct Voluntary Impact Assessments
For high-risk AI applications (any system that affects decisions about individuals), conduct an impact assessment now. Document the risks, the mitigations, and the monitoring in place. This exercise surfaces risks that you may not have considered and creates documentation that regulators will eventually require.
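A voluntary assessment need not be elaborate to be useful. As a sketch, the documentation could be held as a structured record like the one below; the fields and the high-risk heuristic are our assumptions, not an official NZ template.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIImpactAssessment:
    """Minimal impact-assessment record. Illustrative fields only."""
    system_name: str
    purpose: str
    affects_individuals: bool                       # does it influence decisions about people?
    risks: list[str] = field(default_factory=list)        # e.g. bias, accuracy, opacity
    mitigations: list[str] = field(default_factory=list)  # matched to the risks above
    monitoring: str = ""                            # how bias/accuracy drift is tracked
    accountable_owner: str = ""                     # named person responsible for the system

    def is_high_risk(self) -> bool:
        # Mirrors the rule of thumb in the text: any system that affects
        # decisions about individuals is treated as high risk.
        return self.affects_individuals

# Example: document a hypothetical credit-triage system
assessment = AIImpactAssessment(
    system_name="credit-triage",
    purpose="pre-screening of loan applications",
    affects_individuals=True,
    risks=["bias against protected groups"],
    mitigations=["quarterly fairness audit"],
    monitoring="monthly accuracy and disparity dashboard",
    accountable_owner="head of credit risk",
)
```

Keeping the assessment as structured data rather than a one-off document makes it easy to review, version, and publish if that later becomes mandatory.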
Establish Transparency Practices
Start telling customers and stakeholders when AI is involved in decisions that affect them. "This assessment was prepared with AI assistance and reviewed by a senior assessor." This builds trust now and establishes practices that will be required later.
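One low-cost way to make that practice consistent is to generate the disclosure rather than rely on each team to remember it. A minimal sketch, with wording that is illustrative and should be adapted to your context:

```python
def with_ai_disclosure(notice: str, reviewer_role: str) -> str:
    """Append a standard AI-use disclosure to a customer-facing notice.

    The disclosure wording is an example only; adapt it to your
    organisation and take advice on the exact phrasing.
    """
    disclosure = (" This assessment was prepared with AI assistance"
                  f" and reviewed by a {reviewer_role}.")
    return notice.rstrip() + disclosure

# Example: attach the disclosure to a decision notice
message = with_ai_disclosure("Your application has been assessed.",
                             "senior assessor")
```

Centralising the wording means that when regulation does specify disclosure language, it can be updated in one place.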
Monitor the Regulatory Landscape
Assign someone to track NZ AI regulatory developments. The MBIE consultations, Privacy Commissioner guidance, and sector-specific regulatory updates are the key sources. Quarterly review of regulatory developments is sufficient for most organisations.
Engage in Consultation
NZ's regulatory process is consultative. Enterprises that participate in MBIE consultations, Privacy Commissioner guidance processes, and sector-specific regulatory development have the opportunity to shape regulation that is practical and proportionate. Sitting out the consultation process and then complaining about the outcome is not a strategy.
The organisations that will fare best when AI regulation arrives are not the ones lobbying against it. Regulators tend to codify the practices of responsible actors, not accommodate the resistant ones.
Dr Tania Wolfgramm
Chief Research Officer
NZ AI regulation is coming. The form will likely be risk-based, transparency-focused, and incremental. For enterprises, the preparation is straightforward: build governance into your AI systems, conduct impact assessments, practise transparency, and engage with the regulatory process. The organisations that prepare now will find regulation an easy transition. The ones that wait will find it an expensive scramble.

