The EU AI Act: What NZ Enterprises Should Know

The EU AI Act, the world's first major AI regulation, is now law. If your organisation has European customers, operations, or ambitions, this article breaks down what it means for you.
25 August 2024·10 min read
Dr Tania Wolfgramm
Chief Research Officer
Isaac Rolfe
Managing Director
The EU AI Act passed in March 2024 and takes effect in stages starting August 2024. It's the world's first comprehensive AI regulation, and it applies to any organisation that deploys AI systems affecting EU citizens, regardless of where that organisation is based. For NZ enterprises with European clients or operations, this is not a "watch and wait" situation.

What You Need to Know

  • The EU AI Act applies extraterritorially. If your AI system processes data about, or makes decisions affecting, people in the EU, you're in scope, even from New Zealand.
  • The Act uses a risk-based classification system. Most enterprise AI falls into "limited risk" or "high risk" categories, each with different compliance obligations.
  • Prohibited AI practices (social scoring, real-time biometric surveillance) took effect first. High-risk requirements phase in over 2025-2026.
  • NZ has no equivalent AI-specific legislation, but the Privacy Act 2020 and emerging government guidance already create obligations that overlap with parts of the EU framework.
  • The enterprises that benefit most are those that treat this as a governance accelerator, not a compliance burden.
85%
of enterprise AI use cases fall into limited or high-risk categories under the EU AI Act
Source: European Commission, AI Act Impact Assessment, March 2024

The Risk Classification System

The EU AI Act categorises AI systems into four risk tiers. Your compliance obligations depend entirely on which tier your system falls into.

Unacceptable Risk (Banned)

These practices are prohibited outright from February 2025:
  • Social scoring systems by governments
  • Real-time biometric identification in public spaces (with narrow law enforcement exceptions)
  • AI that exploits vulnerabilities of specific groups (age, disability)
  • Emotion recognition in workplaces and educational institutions
Most NZ enterprises won't encounter these, but if you're building AI for European public sector clients, check carefully.

High Risk

This is where most enterprise compliance work concentrates. AI systems are "high risk" when used in:
  • Employment - CV screening, recruitment scoring, performance evaluation
  • Credit and insurance - risk assessment, pricing, claims decisions
  • Education - student assessment, admissions scoring
  • Critical infrastructure - energy, transport, water management
  • Law enforcement - predictive policing, evidence assessment
High-risk systems must meet strict requirements: conformity assessments, risk management systems, data governance, technical documentation, transparency, human oversight, accuracy and robustness standards, and registration in an EU database.

Limited Risk

AI systems with limited risk (including chatbots, content generation tools, and recommendation systems) have transparency obligations. Users must be informed that they're interacting with AI. Deepfakes and AI-generated content must be labelled.
This is where most enterprise AI tools land: internal chatbots, document processing, knowledge retrieval, and customer-facing assistants.

Minimal Risk

AI systems that pose minimal risk (spam filters, AI-enhanced video games, basic automation) have no specific obligations beyond existing law.

What NZ Enterprises Need to Do

Step 1: Classify Your AI Systems

Map every AI system in your organisation against the EU risk categories. For each system, determine:
  • Does it process data about or affect EU citizens?
  • Which risk tier does it fall into?
  • What are the corresponding obligations?
Most NZ enterprises will find their systems fall into "limited risk" (transparency obligations) with some potential "high-risk" systems in HR, insurance, or financial services.

Step 2: Address Transparency Requirements

For limited-risk systems (which is most enterprise AI), the immediate obligation is transparency:
  • Clearly disclose when users are interacting with AI
  • Label AI-generated content
  • Provide information about how AI systems make decisions
  • Enable users to opt for human interaction where appropriate
These are good practices regardless of regulation. If your AI interfaces already follow trust-building design patterns, you're likely well-positioned.
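As a minimal sketch, the first two transparency obligations can be baked into the interface layer so no AI output reaches a user undisclosed. The wording and function names here are hypothetical; the Act requires disclosure and labelling but does not prescribe specific text.

```python
# Illustrative disclosure shown before any AI output in a limited-risk chatbot.
AI_DISCLOSURE = ("You are chatting with an AI assistant. "
                 "You can ask for a human agent at any time.")

def label_generated(content: str) -> str:
    """Attach a visible label so AI-generated content is identifiable."""
    return f"{content}\n\n[This content was generated by an AI system.]"

def start_session() -> list[str]:
    # The disclosure is the first message, before the AI says anything.
    return [AI_DISCLOSURE]

transcript = start_session()
transcript.append(label_generated("Here is a summary of your policy options..."))
```

Putting the disclosure in the session-start path, rather than relying on individual features to remember it, is the design choice that makes compliance hard to forget.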

Step 3: Prepare for High-Risk Compliance

If any of your AI systems fall into the high-risk category, start preparing now. The requirements include:
Risk management. A documented risk management system operating throughout the AI system's lifecycle. Not a one-time assessment, but ongoing identification, analysis, and mitigation of risks.
Data governance. Training, validation, and testing datasets must meet quality criteria. Bias detection and mitigation must be documented. Data provenance must be traceable.
Technical documentation. Detailed documentation of the system's design, development, and intended use. This is more thorough than most enterprises currently maintain.
Record-keeping. Automatic logging of system operations for traceability. Logs must be retained for a period appropriate to the system's purpose.
Human oversight. High-risk systems must be designed for effective human oversight. This means humans must be able to understand, monitor, and intervene in the system's operation.
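The record-keeping and human-oversight requirements above both depend on every AI decision leaving a traceable record. The sketch below shows one way to structure such a record; the field names and the example system are assumptions for illustration, not a prescribed schema.

```python
import json
import logging
import time
import uuid

# Illustrative decision-audit logger for a high-risk system: each decision
# gets a unique, timestamped record tying inputs and output to a model version.
logger = logging.getLogger("ai_audit")

def log_decision(system_id: str, inputs: dict, output: str,
                 model_version: str, human_reviewed: bool) -> dict:
    record = {
        "event_id": str(uuid.uuid4()),     # unique ID for traceability
        "timestamp": time.time(),
        "system_id": system_id,
        "model_version": model_version,    # ties the decision to a model release
        "inputs": inputs,                  # what the system saw
        "output": output,                  # what it decided
        "human_reviewed": human_reviewed,  # supports the oversight requirement
    }
    logger.info(json.dumps(record))
    return record

rec = log_decision("cv-screener", {"applicant_id": "A-102"}, "shortlist",
                   model_version="2.3.1", human_reviewed=True)
```

Retention periods and storage are deliberately out of scope here; those should follow the system's documented purpose, as the record-keeping requirement describes.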
Timeline
High-risk compliance requirements apply from August 2026 for most categories. That sounds like a long time. It isn't, especially for organisations starting from minimal governance infrastructure.

The NZ Context

New Zealand doesn't have AI-specific legislation, and the government's current approach favours principles-based guidance over prescriptive regulation. But the regulatory landscape is shifting:
The Privacy Act 2020 already covers many AI use cases. Automated decision-making that affects individuals triggers existing privacy obligations, particularly around transparency, accuracy, and the right to challenge decisions.
The Algorithm Charter (2020) commits government agencies to transparency about algorithmic decision-making. It's voluntary for the private sector, but it signals the direction of travel.
Trade implications. NZ's trade relationships with the EU mean regulatory alignment matters. As EU AI Act compliance becomes a market expectation, NZ enterprises that can demonstrate compliance gain a competitive advantage in European markets.
42%
of NZ enterprises export products or services that could be affected by EU AI regulation
Source: NZTE, NZ-EU Trade Profile, 2023

Practical Steps: A 6-Month Roadmap

Months 1-2: Assessment

  • Inventory all AI systems (including third-party tools and embedded AI features)
  • Classify each system under the EU AI Act risk framework
  • Identify which systems affect EU citizens or operate in EU markets
  • Review existing governance frameworks against EU requirements

Months 3-4: Gap Analysis and Planning

  • Document gaps between current governance and EU requirements for each in-scope system
  • Prioritise by risk tier (high-risk systems first)
  • Assess vendor compliance. Do your AI vendors meet EU requirements?
  • Develop a compliance roadmap with timelines aligned to EU Act phase-in dates

Months 5-6: Implementation Foundations

  • Implement transparency requirements for limited-risk systems (these apply first)
  • Begin technical documentation for high-risk systems
  • Establish data governance processes for AI training and validation data
  • Set up monitoring and logging infrastructure for in-scope systems

Don't Wait for NZ to Legislate

The temptation for NZ enterprises is to wait. We don't have AI-specific regulation, so why act now? Three reasons:
Market access. EU compliance is becoming a market expectation, not just a legal requirement. European enterprise customers will increasingly require AI governance attestation from their suppliers.
Regulatory direction. NZ will eventually adopt some form of AI governance framework. Building governance now means you're ahead of the curve, not scrambling to catch up.
Good practice. Most EU AI Act requirements (transparency, risk management, human oversight, documentation) are things well-run AI programmes should be doing anyway. The Act is codifying best practice, not inventing new burdens.
The organisations that treat the EU AI Act as a catalyst for better AI governance, rather than a compliance checkbox, will be the ones that deploy AI faster, more confidently, and with more trust from their customers and employees.
Does the EU AI Act apply if we only have a few EU customers?

If your AI system processes data about or makes decisions affecting EU citizens, the Act applies, regardless of volume. A recruitment AI that screens one EU applicant is in scope. In practice, enforcement will focus on larger-scale impacts, but the legal obligation exists from the first EU citizen affected.

What about AI tools we buy from US vendors?

The Act assigns obligations to both providers (who build AI systems) and deployers (who use them). As a deployer, you're responsible for ensuring the AI tools you use comply with applicable requirements. This means asking your vendors hard questions about their EU compliance roadmap.

How does this interact with the NZ Privacy Act?

There's significant overlap. The Privacy Act's requirements around automated decision-making, transparency, and individual rights complement the EU AI Act's requirements. Organisations that comply well with the Privacy Act are already partway to EU compliance. The main gaps are in technical documentation, formal risk management systems, and conformity assessments, which the Privacy Act doesn't specifically address.