
Enterprise AI Governance 101

You need AI governance before you scale AI. The minimum viable governance framework for enterprise.
1 November 2023·6 min read
Dr Tania Wolfgramm
Chief Research Officer
Isaac Rolfe
Managing Director
Enterprise AI is scaling faster than enterprise AI governance. Every organisation we speak with has AI initiatives underway. Very few have governance frameworks that match the pace and scope of deployment. This needs to change before the first serious incident makes the decision for you.

Why Governance Before Scale

Tania: The instinct to defer governance until AI is more mature is understandable but wrong. Governance frameworks established early shape how an organisation develops its AI capability. Frameworks retrofitted after problems emerge are reactive, costly, and carry the additional burden of correcting established practices.
The evidence from adjacent domains is consistent. Organisations that establish information security governance early embed it into culture. Organisations that retrofit it after a breach spend more, achieve less, and face ongoing compliance challenges.
AI governance follows the same pattern.
Isaac: And pragmatically: governance enables speed. I keep saying this because it keeps being misunderstood. Without governance, every AI decision requires ad-hoc assessment. With governance, teams have pre-defined boundaries within which they can operate autonomously. The governance framework is what allows you to scale without bottlenecking every decision through leadership.

The Minimum Viable Governance Framework

You don't need comprehensive governance to start. You need enough governance to deploy responsibly and a plan to mature the framework as your AI capabilities grow.

1. Data Classification

Define what data AI systems can access and process.
Classification | Definition | AI Processing Rules
Public | Published, non-sensitive | Any AI system, including external APIs
Internal | Non-public but non-sensitive | Approved AI systems with appropriate security
Confidential | Sensitive business data | Approved AI systems with enhanced controls
Restricted | PII, health data, legally privileged | On-premises or sovereign AI only, with explicit governance
This classification drives every subsequent governance decision. Get it right first.
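A classification scheme like this is most useful when it is machine-checkable. The sketch below is one minimal way to encode the table in Python; the deployment-type names and the mapping of which deployments may handle which classification are illustrative assumptions, not part of the framework above.

```python
from enum import IntEnum

class Classification(IntEnum):
    """The four classifications from the table above, ordered by sensitivity."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Hypothetical deployment types, each mapped to the most sensitive
# classification it is permitted to process. Your own categories and
# limits will differ; this just mirrors the shape of the table.
MAX_CLASSIFICATION = {
    "external_api": Classification.PUBLIC,
    "approved_saas": Classification.CONFIDENTIAL,
    "on_premises": Classification.RESTRICTED,
}

def can_process(deployment: str, data_class: Classification) -> bool:
    """True if this deployment type may process data of this classification."""
    return data_class <= MAX_CLASSIFICATION[deployment]
```

Encoding the rules once, centrally, is what lets later checks (use case approval, user guidelines) reference a single source of truth rather than each team re-deciding what "sensitive" means.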

2. Use Case Approval

Not every AI application carries the same risk. A writing assistant has different governance requirements than a clinical decision support tool.
Low risk: Internal productivity tools, content drafting, code assistance. Require standard security controls and user guidelines. Can be approved at team level.
Medium risk: Customer-facing tools, data analysis with business impact, process automation. Require security review, accuracy assessment, and human oversight design. Approved at department level.
High risk: Health, legal, financial, or safety-critical applications. Require comprehensive evaluation, bias assessment, governance committee review, and ongoing monitoring. Approved at executive level.
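The three tiers above amount to a simple routing rule: risk level determines the approver and the required reviews. A minimal sketch, with the tier names and review labels taken from the text and everything else (function and variable names) assumed for illustration:

```python
# Approval authority per risk tier, as described above.
APPROVAL_LEVEL = {
    "low": "team",
    "medium": "department",
    "high": "executive",
}

# Reviews required before approval, per tier, paraphrasing the text.
REQUIRED_REVIEWS = {
    "low": ["standard security controls", "user guidelines"],
    "medium": ["security review", "accuracy assessment",
               "human oversight design"],
    "high": ["comprehensive evaluation", "bias assessment",
             "governance committee review", "ongoing monitoring"],
}

def approval_checklist(risk: str) -> dict:
    """Who approves, and which reviews are required, for a given risk tier."""
    return {"approver": APPROVAL_LEVEL[risk],
            "reviews": REQUIRED_REVIEWS[risk]}
```

The point of the table-driven form is that adding a tier or a review requirement is a data change, not a process redesign.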

3. Accountability

Every AI system needs clear ownership:
  • Business owner: Accountable for the outcomes the AI produces
  • Technical owner: Accountable for the system's performance and reliability
  • Data owner: Accountable for the data the AI accesses
  • Governance reviewer: Responsible for ongoing compliance assessment
These roles map to existing positions. You don't need a new organisational structure; you need clear assignments.
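In practice, "clear assignments" means a register: one record per AI system, one name per role, and a quick way to spot gaps. A minimal sketch, assuming a flat record per system (the field and function names are placeholders, not prescribed by the framework):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI accountability register."""
    system: str
    business_owner: str       # accountable for the outcomes produced
    technical_owner: str      # accountable for performance and reliability
    data_owner: str           # accountable for the data accessed
    governance_reviewer: str  # responsible for compliance assessment

def unassigned_roles(record: AISystemRecord) -> list[str]:
    """Roles still missing an assignment (empty string = unassigned)."""
    roles = ["business_owner", "technical_owner",
             "data_owner", "governance_reviewer"]
    return [r for r in roles if not getattr(record, r)]
```

Running something like `unassigned_roles` across the register during the Week 5-6 accountability step is an easy way to surface systems with no owner.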
Fewer than 5% of enterprises had formal AI governance frameworks in place by late 2023.
Source: Gartner, Emerging Technology Roadmap for Large Enterprises, 2023

4. User Guidelines

Clear, practical guidelines for everyone who uses AI tools:
  • What data can be shared with AI systems (reference the data classification)
  • How AI outputs should be reviewed before use
  • How to report AI errors or concerns
  • What constitutes appropriate and inappropriate use
Tania: The guidelines should be practical, not aspirational. A one-page reference that people can actually use is more effective than a comprehensive policy that nobody reads. The detailed policy can exist, but the daily reference must be accessible.

5. Monitoring and Review

Governance isn't a one-time exercise. It requires ongoing attention:
  • Quarterly reviews of AI usage patterns and any incidents
  • Annual review of the governance framework against evolving best practice
  • Incident response procedures for when AI systems produce harmful or incorrect outputs
  • Feedback mechanisms that allow users to flag concerns easily
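The quarterly and annual cadences above are easy to track mechanically. One hedged sketch of a due-date helper, assuming cadences expressed in days (the review-type names are illustrative):

```python
from datetime import date, timedelta

# Hypothetical review cadences in days, matching the list above:
# quarterly usage/incident reviews, annual framework review.
REVIEW_CADENCE = {
    "usage_and_incidents": 91,
    "framework": 365,
}

def next_due(last_review: date, review_type: str) -> date:
    """Date the next review of the given type falls due."""
    return last_review + timedelta(days=REVIEW_CADENCE[review_type])
```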

6. Ethical Principles

Document the ethical principles that guide your AI usage. These should be specific to your organisation and your context, not generic statements about "fairness" and "transparency."
For New Zealand organisations, this includes:
  • Commitment to Te Tiriti o Waitangi obligations in AI deployment
  • Respect for Māori data sovereignty principles
  • Assessment of cultural appropriateness, not just technical accuracy
  • Consideration of impacts on all communities served

Getting Started

Week 1: Audit current AI usage. What tools are being used? What data is being processed? Who is using them?
Weeks 2-3: Establish data classification. This is the foundation for everything else.
Week 4: Draft use case approval process and user guidelines. Keep them simple.
Weeks 5-6: Assign accountability for existing AI systems. Identify any systems that require elevated governance.
Ongoing: Quarterly review cycle. Update framework as capabilities and risks evolve.
Isaac: The goal is not perfection. The goal is "good enough to deploy responsibly today, with a clear path to maturity." Any governance is better than no governance. Start where you are.
Figure: Minimum Viable AI Governance Timeline. Source: RIVER Group, AI Governance Framework, 2023