AI has moved from "interesting technology" to "board-level strategic concern." If you're a director in 2026, you have governance responsibilities around AI whether your organisation has deployed it or not. This guide covers what you need to know, in plain language, without the technical jargon.
What You Need to Know
- AI governance is now a board responsibility, not a technology team concern. The NZ Government AI Framework (2025) and emerging international standards make AI oversight a governance obligation. Boards that delegate AI entirely to IT are exposed.
- You don't need to understand how AI works. You need to understand what it does, what it risks, and how it's governed. This guide focuses on the questions you should ask and the answers you should expect, not the technology itself.
- The biggest board-level AI risk in 2026 isn't a rogue AI. It's organisational AI that operates without governance. Employees using AI tools without policies, teams deploying AI without oversight, decisions influenced by AI without audit trails. This is where the real risk lives.
- AI inaction is also a risk. Boards that block AI adoption to avoid risk create a different risk: competitive irrelevance. The cost of AI inaction is compounding every quarter.
- Effective AI governance mirrors good corporate governance. If you understand risk management, audit, and strategy oversight, you understand 80% of what AI governance requires. The remaining 20% is AI-specific, and that's what this guide covers.
40% of NZ enterprise boards now receive regular AI-specific reporting. Source: NZTech, AI Readiness in Aotearoa 2025.
Your Three Governance Responsibilities
1. Strategic Oversight
AI is a strategic capability, not a technology purchase. The board's role is to ensure the organisation has an AI strategy that:
- Aligns with business strategy. AI initiatives should serve business objectives, not exist as standalone technology experiments. If the AI programme can't articulate which business outcomes it serves, something is wrong.
- Is appropriately resourced. AI programmes that are underfunded relative to their ambition produce pilots that never scale. Boards should challenge both under-investment and over-investment.
- Has measurable outcomes. "We're doing AI" is not a strategy. "We're deploying three AI capabilities that will reduce claims processing time by 40% and save $2M annually" is a strategy. Demand specific metrics.
The question to ask: "What business outcomes will our AI programme deliver in the next 12 months, and how will we measure them?"
2. Risk Oversight
AI introduces risks that existing governance frameworks may not cover:
Data risk. AI systems process large volumes of data, potentially including personal information, commercially sensitive material, and culturally significant content. The board must ensure data handling meets legal requirements and community expectations.
Decision risk. AI systems increasingly influence or make decisions: claims approvals, risk assessments, customer communications. The board must understand which decisions are AI-influenced, what safeguards exist, and who is accountable when the AI gets it wrong.
Reputation risk. AI outputs that are biased, inaccurate, or inappropriate damage trust. The board must ensure appropriate testing, monitoring, and incident response for AI-generated content.
Dependency risk. Over-reliance on a single AI vendor or model creates strategic vulnerability. The board should understand vendor relationships and portability options.
Regulatory risk. AI regulation is evolving rapidly. The NZ Government AI Framework is principles-based today, but prescriptive regulation is likely within 2-3 years. Boards should ensure the organisation's AI governance is ahead of regulation, not scrambling to catch up.
The question to ask: "What is our AI risk register, who owns it, and when was it last reviewed?"
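The risk register that question refers to does not need to be complicated. As a purely illustrative sketch (the field names, scoring scale, and example entry below are assumptions for illustration, not a standard), a minimal register entry might capture the five risk categories above, a named owner, and a review date:

```python
# Illustrative only: a minimal AI risk register entry. Field names and the
# 1-5 likelihood/impact scale are hypothetical; adapt to your organisation's
# existing risk framework.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRisk:
    risk_id: str
    description: str
    category: str            # "data", "decision", "reputation", "dependency", "regulatory"
    owner: str               # a named individual, not a team
    likelihood: int          # 1 (rare) to 5 (almost certain)
    impact: int              # 1 (minor) to 5 (severe)
    mitigations: list = field(default_factory=list)
    last_reviewed: date = None

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, for ranking in board reports
        return self.likelihood * self.impact

register = [
    AIRisk(
        risk_id="AI-001",
        description="Claims-triage model produces biased outcomes for some groups",
        category="decision",
        owner="Chief Risk Officer",
        likelihood=3,
        impact=4,
        mitigations=["quarterly bias testing", "human review above threshold"],
        last_reviewed=date(2026, 2, 1),
    ),
]

# Highest-scoring risks surface first in board reporting
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.risk_id, risk.category, risk.score, risk.owner)
```

Whatever form the register takes, the governance point is the same: every entry names one accountable owner and carries a review date the board can check against the quarterly cadence.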
3. Ethical Oversight
AI governance has an ethical dimension that extends beyond legal compliance:
Fairness. AI systems can perpetuate or amplify existing biases in data. The board should ask whether AI systems are tested for bias across different demographic groups, including specific consideration of Māori and Pacific communities.
Transparency. When AI influences decisions that affect people (customers, employees, communities), those people deserve to know. The board should ensure the organisation has a clear position on AI transparency and disclosure.
Data sovereignty. For NZ organisations, this includes Māori data sovereignty considerations. AI systems that process data related to Māori communities, health, education, or cultural knowledge carry specific obligations that boards should understand.
The question to ask: "How do we ensure our AI systems are fair, transparent, and respect data sovereignty obligations?"
The Board AI Oversight Checklist
Use this checklist to assess your board's AI governance readiness:
Governance Framework
- Formal AI governance policy approved by the board
- AI risk register maintained and reviewed quarterly
- Clear accountability: who owns AI governance (name, not team)
- AI-specific incident response plan
- Board receives regular AI reporting (at least quarterly)
Strategic Alignment
- AI strategy documented and linked to business strategy
- AI investment approved as a programme, not individual projects
- Clear metrics for AI programme success (business outcomes, not technical metrics)
- Regular review of AI programme against strategy
Risk Management
- Data governance framework covers AI data use
- AI-influenced decisions identified and safeguarded
- Vendor risk assessed and documented
- Regulatory horizon scanning for AI-specific regulation
- Cyber security framework extended to cover AI-specific threats
Ethical Standards
- Bias testing programme for AI systems
- Transparency policy for AI-influenced decisions
- Māori and Pacific data sovereignty considerations documented
- Human oversight for high-stakes AI decisions
52% of NZ enterprises had formal AI governance frameworks in place in 2025. Source: NZTech, AI Readiness in Aotearoa 2025.
Ten Questions Every Director Should Ask
These are the questions that separate boards with effective AI oversight from those that are rubber-stamping technology decisions.
About Strategy
- "What is our AI strategy, and how does it connect to our business strategy?" If the answer starts with technology, not business outcomes, push back.
- "How many AI capabilities are in production, and what measurable value are they delivering?" Beware "we have exciting pilots" without production deployment or measured outcomes.
- "Are we building shared AI infrastructure, or running disconnected projects?" Shared foundations compound value with each new capability; disconnected projects compound cost.
About Risk
- "What decisions are AI-influenced, and what safeguards exist?" Every AI-influenced decision should have human oversight appropriate to its risk level.
- "Where is our AI training and operational data stored, and who has access?" Data sovereignty is a board-level concern, especially for NZ organisations.
- "What happens when the AI gets it wrong?" Incident response for AI should be as clear as incident response for a data breach.
About Governance
- "Who is accountable for AI governance, and do they have authority and budget?" Governance without resources is theatre.
- "When was our AI governance framework last reviewed and updated?" AI moves fast. A governance framework from 2024 needs updating in 2026.
About People
- "How AI-literate is our leadership team?" If the CEO and direct reports can't articulate AI opportunities and risks, the organisation is flying blind.
- "Are our people using AI tools today, and do we have policies governing that use?" The honest answer for most organisations is "yes, and no." Both need addressing.
Red Flags for Directors
Watch for these signals that AI governance is inadequate:
- No one can name the AI governance owner. If accountability isn't clear, governance doesn't exist.
- AI reporting focuses on activity, not outcomes. "We ran 12 experiments" means nothing without "and here's what we deployed and what it delivered."
- The AI strategy is a technology strategy. AI strategy should be business strategy enabled by AI, not a list of tools and models.
- Governance is blocking all AI activity. Governance should enable safe AI adoption, not prevent adoption entirely. If nothing is getting through governance, the framework is wrong.
- AI is being positioned as cost-free. Every AI capability has ongoing operational costs. If nobody is budgeting for operations, the deployed systems will degrade.
- No mention of data sovereignty or ethical considerations. In the NZ context, these are governance obligations, not optional extras.
You Don't Need to Be Technical
Effective AI governance doesn't require technical expertise. It requires the same skills you apply to any governance domain: asking clear questions, expecting measurable answers, ensuring accountability, and managing risk. If you can govern a financial audit, you can govern an AI programme.
Building Board AI Capability
If your board needs to upskill on AI governance:
- Start with a board briefing. A 2-hour session covering AI fundamentals, your organisation's AI programme, and governance requirements. Use an independent facilitator, not your AI vendor.
- Assign an AI-savvy director. Identify or recruit a director with enough AI understanding to challenge management effectively. This person doesn't need to be a data scientist. They need to understand AI strategy and risk.
- Establish a reporting cadence. Quarterly AI reporting to the board covering: programme status, capability metrics, risk register updates, governance framework changes, and emerging regulatory developments.
- Join a peer network. NZ has emerging director networks focused on AI governance. Peer learning from other boards is more valuable than vendor presentations.
The board's role in AI isn't to direct the technology. It's to ensure the organisation has the strategy, governance, and accountability to use AI responsibly and effectively. In 2026, that's not a nice-to-have. It's a governance requirement.
Frequently Asked Questions
Do we need AI expertise on the board?
Not necessarily deep technical expertise, but you need at least one director who can critically evaluate AI strategy and risk. Many boards are adding this through targeted recruitment or advisory arrangements. The bar isn't "can build an AI model"; it's "can ask the right questions and evaluate the answers."
Should the board approve every AI initiative?
No. The board should approve the AI strategy and governance framework, then delegate execution within that framework. Individual AI capabilities should be approved through the governance framework, with the board receiving regular reporting on portfolio progress and risk. Board-level approval should be reserved for high-risk or high-investment initiatives.
What's our liability if an AI system makes a harmful decision?
The legal framework is still evolving, but the principles are clear: the organisation is responsible for decisions made by or influenced by its AI systems. Directors' duties of care and diligence extend to ensuring appropriate AI governance. The best protection is a strong governance framework with clear accountability, appropriate human oversight, and documented decision trails.

