
AI Governance Is Not Optional

ISO 42001 is about to land. The EU AI Act is coming. And your enterprise AI initiatives need governance frameworks: not next year, now.
10 November 2023 · 8 min read
Dr Tania Wolfgramm
Chief Research Officer
The first international AI management standard, ISO/IEC 42001, is weeks away from publication. The EU AI Act is moving through final approvals. And most enterprises deploying AI still don't have a governance framework beyond "be careful".

What You Need to Know

  • AI governance isn't bureaucracy. It's the framework that lets you deploy AI faster and with more confidence. Ungoverned AI is slow AI, because every decision requires ad-hoc risk assessment.
  • ISO/IEC 42001 (due for publication in December 2023) provides the first international benchmark for AI management systems. It's voluntary now, but it signals where enterprise expectations are heading.
  • The EU AI Act introduces risk-based AI regulation that will affect any enterprise operating in or selling to European markets, including NZ/AU companies with global clients.
  • You don't need comprehensive governance to start. You need enough governance to deploy responsibly and a plan to mature your framework as your AI capabilities grow.
  • The enterprises with governance frameworks already in place will move faster when regulations arrive. They won't be scrambling to retrofit compliance.
By late 2023, 29% of organisations had deployed and were using generative AI (Source: Gartner, Q4 2023 Enterprise Survey, October 2023), yet fewer than 5% of enterprises had formal AI governance frameworks in place (Source: Gartner, Emerging Technology Roadmap for Large Enterprises, 2023).

Why Governance Enables Speed

This is counterintuitive, so let me be direct: governance makes you faster, not slower.
Without a governance framework, every AI decision is bespoke. Should we use this data? Who approves this model? What happens when it's wrong? Each question requires a new conversation, a new risk assessment, a new leadership discussion. This is slow.
With a governance framework, these questions have pre-answered guidelines. Data classification determines what data AI can use. Approval workflows are defined. Error handling and escalation paths exist. The team can move without waiting for permission on every decision.
Governance is the infrastructure that allows AI teams to operate autonomously within safe boundaries, the same way financial controls let finance teams operate without CEO approval on every transaction.

The Three Pillars of AI Governance

1. Accountability: Who Is Responsible?

Every AI system needs clear accountability:
  • Model owner: responsible for the model's behaviour, accuracy, and maintenance
  • Data owner: responsible for the data the model accesses and its quality
  • Business owner: responsible for the outcomes the AI produces and the decisions it informs
  • Ethics reviewer: responsible for assessing fairness, bias, and societal impact
In practice, for most enterprises, these roles map to existing positions. The model owner is often the technical lead, the data owner is the data steward, the business owner is the department head. You don't need new roles; you need clear assignments.
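One lightweight way to make those assignments explicit is to record them against each system in your AI asset register. The sketch below is illustrative only; the field names, role titles, and the example system are assumptions for this article, not terms taken from ISO/IEC 42001.

```python
# Illustrative only: one way to record accountability for a single AI system.
# Role titles and the example system are assumptions, not prescribed by any standard.
from dataclasses import dataclass

@dataclass
class AIAccountability:
    system: str
    model_owner: str       # behaviour, accuracy, maintenance
    data_owner: str        # data access and quality
    business_owner: str    # outcomes and decisions the AI informs
    ethics_reviewer: str   # fairness, bias, societal impact

invoice_assistant = AIAccountability(
    system="invoice-processing-assistant",
    model_owner="Technical Lead, Finance Systems",
    data_owner="Finance Data Steward",
    business_owner="Head of Accounts Payable",
    ethics_reviewer="Risk and Compliance Advisor",
)
```

However you store it, the point is the same: for every system, anyone in the organisation can look up who answers for the model, the data, the business outcome, and the ethical review.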

2. Transparency: How Does It Work?

AI systems must be explainable at the level appropriate to their impact:
  • Low-risk applications (content suggestions, document summaries): basic logging of inputs and outputs
  • Medium-risk applications (process automation, recommendation systems): source attribution, confidence scores, audit trails
  • High-risk applications (clinical triage, financial decisions, compliance): full explainability, human-in-the-loop review, complete audit trails
The level of transparency should match the stakes. A document processing tool needs source attribution. A clinical triage system needs full chain-of-reasoning documentation.
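A simple way to operationalise that tiering is a fixed mapping from risk class to minimum controls, so each new application inherits its requirements from its classification rather than from a fresh debate. The mapping below is a sketch; the tier names and control lists are assumptions for illustration, not a standard taxonomy.

```python
# Sketch: minimum transparency controls by risk tier (illustrative, not exhaustive).
TRANSPARENCY_CONTROLS = {
    "low": [
        "input/output logging",
    ],
    "medium": [
        "input/output logging",
        "source attribution",
        "confidence scores",
        "audit trail",
    ],
    "high": [
        "input/output logging",
        "source attribution",
        "confidence scores",
        "complete audit trail",
        "full explainability documentation",
        "human-in-the-loop review",
    ],
}

def controls_for(risk_tier: str) -> list[str]:
    """Return the minimum transparency controls for a classified application."""
    return TRANSPARENCY_CONTROLS[risk_tier]

# A medium-risk recommendation system inherits its requirements automatically.
print(controls_for("medium"))
```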

3. Control: What Are the Boundaries?

Define what AI can and can't do before it's deployed (see the sketch after this list):
  • Data boundaries: which data can AI access? How is PII handled?
  • Decision boundaries: which decisions can AI make autonomously, and which require human approval?
  • Output boundaries: what filters, guardrails, and quality checks apply to AI outputs?
  • Escalation paths: when the AI encounters something outside its boundaries, what happens?
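In practice, these boundaries tend to become pre-flight checks that either let the AI proceed or escalate to a person. The sketch below is hypothetical; the allowed data classes, decision types, and function name are assumptions specific to this example, not a prescribed implementation.

```python
# Hypothetical boundary check: proceed only when the request sits inside the
# data and decision boundaries defined for this system; otherwise escalate.
ALLOWED_DATA_CLASSES = {"public", "internal"}            # data boundary (no PII)
AUTONOMOUS_DECISIONS = {"summarise", "route", "draft"}   # decision boundary

def check_boundaries(data_class: str, decision: str) -> str:
    if data_class not in ALLOWED_DATA_CLASSES:
        return "escalate: data outside approved classification"
    if decision not in AUTONOMOUS_DECISIONS:
        return "escalate: decision requires human approval"
    return "proceed: within defined boundaries"

print(check_boundaries("internal", "approve-payment"))  # escalate: decision requires human approval
```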

ISO/IEC 42001: What It Means for Enterprises

The publication of ISO/IEC 42001, due in December 2023, will create the first international benchmark for AI management systems. Key implications:
It's voluntary, but expect it to become expected. Just as ISO 27001 (information security) moved from "nice to have" to "required for enterprise contracts," ISO 42001 will likely follow the same path. Enterprises in regulated industries should expect procurement teams to start asking about AI management system certification.
It follows Plan-Do-Check-Act. If your organisation already has ISO management system experience (27001, 9001, etc.), the 42001 framework will feel familiar. It's designed to integrate with existing management systems, not replace them.
It covers the full AI lifecycle, from design and development through deployment, monitoring, and decommissioning. This is broader than most current enterprise AI governance, which tends to focus only on deployment.

A Practical Starting Point

You don't need full ISO 42001 compliance to start governing AI. Here's a minimal framework:

Month 1: Foundations

  • AI usage policy (what data, which tools, what approval)
  • AI asset register (which AI systems are in use or planned; see the sketch after this list)
  • Risk classification for each AI application (low / medium / high)
  • Accountability assignments for each AI application
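As an entirely illustrative example, the asset register and risk classification from the list above can start life as something this simple; the system names, owners, and fields are assumptions, not a prescribed schema.

```python
# Illustrative Month 1 asset register: what each system is, how risky it is,
# and who is accountable for it. High-risk entries also name an ethics reviewer.
AI_ASSET_REGISTER = [
    {"system": "document-summariser", "risk": "low",
     "business_owner": "Knowledge Manager"},
    {"system": "customer-recommendation-engine", "risk": "medium",
     "business_owner": "Head of Digital"},
    {"system": "clinical-triage-pilot", "risk": "high",
     "business_owner": "Chief Medical Officer",
     "ethics_reviewer": "Clinical Governance Lead"},
]

# Minimal governance check: every high-risk system must have an ethics reviewer.
for entry in AI_ASSET_REGISTER:
    if entry["risk"] == "high":
        assert "ethics_reviewer" in entry, f"{entry['system']} needs an ethics reviewer"
```

A spreadsheet works just as well for Month 1; what matters is that the register exists, is kept current, and is the single place where classification and accountability are recorded.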

Months 2-3: Operational Controls

  • Data classification extended for AI use cases
  • Security controls for AI deployments
  • Monitoring and logging requirements defined
  • Incident response procedures for AI-specific issues

Months 4-6: Maturity

  • Bias and fairness assessment process for high-risk applications
  • Regular model performance reviews (quarterly minimum)
  • Feedback mechanisms for users to report AI errors
  • Governance framework documentation aligned with ISO 42001 structure
The organisations that build governance frameworks now, even simple ones, will be the ones that can deploy AI at scale when the regulations arrive. The ones that wait will spend 2024 retrofitting compliance instead of building capability.
Frequently Asked Questions
Does AI governance apply to using ChatGPT in our organisation?
Yes. At the most basic level, your AI usage policy should cover consumer AI tools. What data can employees share with ChatGPT? Which tasks are appropriate? What approval is needed? This is your first governance step, and it addresses your most immediate risk.
How does AI governance relate to existing compliance frameworks (ISO 27001, SOC 2)?
AI governance extends and complements them. Your information security controls (27001) apply to AI data handling. Your audit controls (SOC 2) apply to AI logging and monitoring. ISO 42001 sits alongside these frameworks, adding AI-specific controls. You don't start from scratch. You build on what you have.
Is New Zealand likely to introduce AI-specific regulation?
As of late 2023, NZ's approach favours guidance over legislation, using existing frameworks (Privacy Act, public sector accountability) rather than new AI-specific laws. But the NZ government is watching the EU AI Act closely, and some form of AI-specific guidance for regulated industries is likely within 18-24 months.