The Privacy Act 2020 wasn't written with AI in mind. But its principles (purpose limitation, data minimisation, transparency, individual access) apply directly to every enterprise AI deployment in New Zealand. The gap between what the law requires and what most organisations are doing with AI is wider than it should be.
What You Need to Know
- The NZ Privacy Act 2020 applies to AI systems that collect, store, or use personal information. There's no AI-specific exemption, and the Office of the Privacy Commissioner (OPC) has been increasingly clear about expectations.
- Automated decision-making is the highest-risk area. If your AI system makes or significantly influences decisions about individuals, you have specific obligations around transparency, accuracy, and the right to challenge those decisions.
- Cross-border data transfers matter because most AI models are hosted overseas. Sending personal information to an AI provider's servers in the US or EU triggers the Privacy Act's cross-border disclosure requirements.
- The OPC published guidance on AI and privacy in 2024. It's principles-based, not prescriptive, which means enterprises need to interpret and apply it to their specific use cases.
- Getting this right isn't just compliance - it's a competitive advantage. Enterprises with clear AI privacy practices build trust with customers, partners, and regulators faster than those scrambling to retrofit.
78% of NZ consumers are concerned about how organisations use their personal data with AI (Source: Office of the Privacy Commissioner, Privacy Trust Survey, 2024).
The Privacy Act Principles That Matter Most for AI
The Privacy Act has 13 information privacy principles (IPPs). Six are particularly relevant to AI:
IPP 1: Purpose of Collection
The principle: Personal information must be collected for a lawful purpose connected with the agency's functions, and the collection must be necessary for that purpose.
AI implication: If you collected customer data for service delivery, using it to train an AI model is a different purpose. You need to do one of three things: (a) ensure the AI use falls within the original collection purpose, (b) obtain fresh consent for the AI use, or (c) anonymise the data so it is no longer personal information.
Practical guidance: Audit your AI training data. For each dataset, trace it back to the original collection purpose. If there's a gap, address it before proceeding.
IPP 3: Collection of Information from Subject
The principle: Where possible, collect personal information directly from the individual concerned. Inform them of the purpose, intended recipients, and consequences of not providing the information.
AI implication: If your AI system collects information about individuals (through analysis of their behaviour, documents, or interactions), those individuals should know about it. A customer whose documents are processed by AI should be informed that AI is involved.
IPP 5: Storage and Security
The principle: Ensure personal information is protected against loss, unauthorised access, use, modification, or disclosure.
AI implication: AI systems create new attack surfaces. Model inputs and outputs may contain personal information, and conversation logs, embeddings, and vector databases all store personal information that needs appropriate security controls.
IPP 6: Access to Personal Information
The principle: Individuals have the right to access their personal information held by an agency.
AI implication: If your AI system holds information about an individual (in knowledge bases, conversation history, or model context), they can request access to it. You need to be able to retrieve and provide it.
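What does "retrieve and provide" look like in practice? A minimal sketch, assuming a hypothetical setup where every AI data store (conversation logs, vector-store metadata, and so on) tags records with a subject_id field; real deployments will query actual log tables and vector-database metadata filters rather than in-memory lists:

```python
from typing import Iterable


def collect_subject_records(
    stores: dict[str, Iterable[dict]], subject_id: str
) -> dict[str, list[dict]]:
    """Gather every record about one individual across AI data stores,
    keyed by store name, to answer an IPP 6 access request."""
    found: dict[str, list[dict]] = {}
    for store_name, records in stores.items():
        matches = [r for r in records if r.get("subject_id") == subject_id]
        if matches:
            found[store_name] = matches
    return found


# Toy example: one conversation-log entry and one vector-store metadata row.
stores = {
    "conversation_logs": [
        {"subject_id": "cust-042", "text": "Please update my address."},
    ],
    "vector_store_metadata": [
        {"subject_id": "cust-042", "doc": "claim-form.pdf", "chunk": 3},
    ],
}
print(collect_subject_records(stores, "cust-042"))
```

The design point is the inventory: an access request is only answerable if you know every store that can hold personal information and can query each one by individual.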
IPP 8: Accuracy
The principle: An agency must take reasonable steps to ensure personal information is accurate, complete, and not misleading.
AI implication: AI systems can generate inaccurate information about individuals (hallucinations, incorrect inferences). If this information is stored or used for decision-making, the accuracy obligation applies. Enterprises must have processes to verify AI-generated personal information.
IPP 12: Disclosure Outside New Zealand
The principle: Personal information can only be disclosed to a foreign person or entity if adequate protections are in place.
AI implication: Sending personal information to an AI provider hosted overseas (OpenAI in the US, Anthropic in the US, Google in multiple jurisdictions) constitutes cross-border disclosure. You need to ensure the receiving entity provides comparable privacy protections.
Automated Decision-Making: The High-Risk Area
The Privacy Act doesn't have a specific automated decision-making provision (unlike Article 22 of the EU's GDPR). But the OPC's 2024 guidance makes clear that automated decisions about individuals engage multiple privacy principles, particularly accuracy (IPP 8), access (IPP 6), and the obligation not to use information for purposes other than originally intended (IPP 10).
What Counts as Automated Decision-Making?
| Scenario | Risk Level | Requirements |
|---|---|---|
| AI summarises a document (no decisions about individuals) | Low | Basic logging, privacy notice |
| AI triages customer enquiries by urgency | Medium | Transparency about AI involvement, human oversight for escalations |
| AI assesses insurance claim eligibility | High | Full transparency, human review before final decision, right to challenge |
| AI scores job applicants or credit applications | High | Explanation of factors, human decision-maker, bias monitoring |
The Minimum Standard for High-Risk Automated Decisions
- Inform the individual that AI is involved in the decision
- Explain the key factors that influenced the AI's recommendation
- Ensure human review before the final decision is made
- Provide a mechanism to challenge the decision
- Monitor outcomes for bias across demographic groups (a record structure covering all five items is sketched below)
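One way to make these five items auditable is to capture them in a record for every decision. The sketch below is illustrative only, not an OPC-mandated format; every field name is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AutomatedDecisionRecord:
    """Illustrative per-decision record for high-risk automated decisions.
    Field names are assumptions, not a regulatory schema."""
    subject_id: str
    ai_involvement_disclosed: bool        # individual told AI is involved
    key_factors: list[str]                # factors behind the recommendation
    human_reviewer: str | None            # who reviewed before the final call
    challenge_channel: str                # how the decision can be contested
    outcome: str
    demographic_group: str | None = None  # for aggregate bias monitoring only
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def minimum_standard_gaps(self) -> list[str]:
        """Return a list of gaps; an empty list means the record is complete."""
        gaps = []
        if not self.ai_involvement_disclosed:
            gaps.append("individual not informed of AI involvement")
        if not self.key_factors:
            gaps.append("no key factors recorded")
        if self.human_reviewer is None:
            gaps.append("no human review before final decision")
        if not self.challenge_channel:
            gaps.append("no challenge mechanism recorded")
        return gaps
```

A record like this also feeds the fifth item: bias monitoring needs decision outcomes grouped by demographics, which is exactly what these fields preserve.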
Cross-Border Data Transfers: A Practical Checklist
Most enterprise AI involves sending data to overseas providers. Here's the compliance checklist:
- Identify all cross-border data flows. Map where personal information goes: which AI providers, which jurisdictions, which data centres
- Assess the receiving jurisdiction's protections. Does the destination country provide comparable privacy protections? (The US has no comprehensive federal privacy law equivalent to the Privacy Act)
- Implement contractual safeguards. Data processing agreements with AI providers should include: purpose limitation, data retention limits, security requirements, breach notification obligations, and sub-processor restrictions
- Inform individuals. Your privacy notice should disclose that personal information may be processed overseas by AI providers, and name the jurisdictions
- Consider data minimisation. Can you anonymise or pseudonymise data before sending it to the AI provider? This reduces risk and may remove the cross-border disclosure obligation entirely (a redaction sketch follows this checklist)
- Evaluate sovereign hosting options. For high-sensitivity data, consider AI deployments hosted within NZ or Australia
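On the data minimisation point, here is a minimal pseudonymisation sketch: it replaces email addresses and common NZ phone formats with reversible tokens before anything crosses the border, keeping the re-identification mapping onshore. The regex patterns are assumptions and deliberately narrow; a production system should use a dedicated PII detection tool rather than hand-rolled patterns:

```python
import re

# Illustrative patterns only: emails plus common NZ phone formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"(?:\+64|0)[\s-]?\d{1,2}[\s-]?\d{3}[\s-]?\d{3,4}"),
}


def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with tokens; return the redacted text plus the
    mapping needed to re-identify locally. Only the redacted text
    should be sent to the overseas AI provider."""
    mapping: dict[str, str] = {}
    counter = 0

    def make_repl(kind: str):
        def repl(match: re.Match) -> str:
            nonlocal counter
            counter += 1
            token = f"<{kind}_{counter}>"
            mapping[token] = match.group(0)
            return token
        return repl

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(make_repl(kind), text)
    return text, mapping


redacted, mapping = pseudonymise(
    "Contact Jane at jane.doe@example.co.nz or 021 555 1234."
)
print(redacted)  # Contact Jane at <EMAIL_1> or <PHONE_2>.
print(mapping)   # tokens -> original values, retained onshore
```

Note that "Jane" survives redaction - exactly the kind of gap that makes regex-only approaches a starting point rather than a complete answer.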
89% of NZ enterprises using AI send personal information to overseas AI providers (Source: NZ Privacy Commissioner, Technology and Privacy Report, 2024).
Practical Compliance Framework
Step 1: AI Privacy Impact Assessment
Before deploying any AI system that processes personal information, conduct a privacy impact assessment (PIA). The OPC provides a PIA template. For AI, extend it with the questions below, captured as a structured record in the sketch that follows the list:
- What personal information does the AI process?
- Is the AI making or influencing decisions about individuals?
- Where is the personal information processed and stored?
- How long is personal information retained by the AI system?
- Can individuals request access to, correction of, or deletion of their information?
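To keep those answers consistent and auditable across systems, they can be captured as one structured record per AI deployment. A sketch with illustrative field names (assumptions, not fields from the OPC template):

```python
from dataclasses import dataclass


@dataclass
class AiPrivacyAssessment:
    """One record per AI system, mirroring the PIA extension questions
    above. Field names are illustrative."""
    system_name: str
    personal_information: list[str]    # e.g. ["name", "email", "claim history"]
    influences_decisions: bool         # makes or influences decisions about people
    processing_locations: list[str]    # jurisdictions and data centres
    retention_period_days: int | None  # None = not yet confirmed with provider
    supports_access_requests: bool     # access, correction, deletion possible

    def open_issues(self) -> list[str]:
        """Flag unanswered questions before the system goes live."""
        issues = []
        if self.retention_period_days is None:
            issues.append("retention period unconfirmed with provider")
        if not self.supports_access_requests:
            issues.append("no mechanism for access, correction, or deletion")
        if self.influences_decisions:
            issues.append("automated decision-making: apply the minimum standard")
        return issues
```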
Step 2: Privacy Notice Updates
Update your privacy notice to cover AI use. Be specific:
- Which services use AI processing
- What personal information is processed by AI
- Whether AI is involved in automated decisions
- Where personal information is sent for AI processing
- How individuals can enquire about or challenge AI-assisted decisions
Step 3: Data Processing Agreements
Review agreements with AI providers. Ensure they cover:
- Purpose limitation (provider can only use your data for your specified purposes)
- Data retention and deletion
- Security standards and breach notification
- Sub-processor disclosure and approval
- Audit rights
Step 4: Operational Controls
Implement ongoing controls:
- Regular audits of AI data flows against the PIA
- Staff training on privacy obligations when using AI tools
- Incident response procedures for AI-specific privacy breaches
- Quarterly review of AI provider terms and data handling practices
- Bias monitoring for automated decision-making systems (a minimal monitoring sketch follows)
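For the bias monitoring item, a minimal sketch: compare favourable-outcome rates across demographic groups from your decision logs and flag any group whose rate falls below 80% of the best group's. The 80% threshold is borrowed from the US "four-fifths" heuristic, not a NZ legal standard, so treat a flag as a trigger for human investigation rather than a verdict:

```python
from collections import defaultdict


def disparate_impact(outcomes: list[tuple[str, bool]], threshold: float = 0.8):
    """outcomes: (demographic_group, favourable?) pairs from decision logs.
    Returns per-group favourable rates and the groups falling below
    threshold x the highest group's rate."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            favourable[group] += 1
    rates = {g: favourable[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = [g for g, r in rates.items() if best and r / best < threshold]
    return rates, flagged


# Toy data: group B's favourable rate is well under 80% of group A's.
outcomes = [("A", True)] * 80 + [("A", False)] * 20 \
         + [("B", True)] * 50 + [("B", False)] * 50
rates, flagged = disparate_impact(outcomes)
print(rates)    # {'A': 0.8, 'B': 0.5}
print(flagged)  # ['B'] -> 0.5 / 0.8 = 0.625, below the 0.8 threshold
```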
The Quick Test
For any AI system processing personal information, ask: "If a customer asked what data we're sending to AI, where it goes, and how it's used in decisions about them - could we answer clearly and completely?" If not, you have a compliance gap.
What's Coming
The OPC is actively developing its approach to AI. Expect:
- More specific guidance on automated decision-making, likely by late 2025
- Industry-specific guidance for high-risk sectors (financial services, healthcare, government)
- Increased enforcement activity as AI deployments become more visible and complaints increase
- Potential legislative amendments. The Privacy Act is due for review, and AI-specific provisions are likely to be considered
The enterprises that build strong AI privacy practices now will be ahead of whatever regulatory changes come next, not scrambling to catch up.
Frequently Asked Questions
- Does the Privacy Act apply to AI systems that only process business information, not personal information?
- No. The Privacy Act only applies to personal information (information about identifiable individuals). If your AI system exclusively processes business data with no personal information, the Privacy Act's obligations don't apply. But be careful: business documents often contain personal information (names, contact details, roles) even when the primary purpose is business analysis.
- Can we use personal information to fine-tune or train AI models?
- Only if it falls within the original collection purpose, or you have specific consent for model training, or the data is properly anonymised. The safest approach: anonymise training data so individuals can't be identified, or obtain explicit consent for AI training as a stated purpose.
- What happens if our AI provider changes their terms or data handling practices?
- You remain responsible for the personal information you share with providers. Monitor provider terms and data handling changes. Include contractual provisions requiring advance notice of material changes. If a provider's practices no longer meet your privacy obligations, you need to address the gap - by negotiating better terms, switching providers, or adjusting what data you share.
