I've sat through dozens of board meetings over the past two years where AI was on the agenda. In almost every one, the conversation started with opportunity: revenue growth, cost reduction, competitive advantage. Governance, if it came up at all, was the last item before lunch. That order is backwards, and boards are starting to pay the price.
What You Need to Know
- Most boards are asking "what's our AI strategy?" as the first question. The first question should be "what's our AI governance framework?"
- Strategy without governance creates liability. Governance without strategy creates bureaucracy. Boards need both, but governance must come first because it defines the boundaries within which strategy can safely operate.
- The EU AI Act is now in force. Boards of companies operating in European markets can face personal liability for AI governance failures. This isn't theoretical.
- Effective AI governance at board level requires three things: a clear risk taxonomy, defined accountability structures, and regular review cadences.
- Boards don't need to understand how large language models work. They need to understand what decisions AI is making, what data it's using, and who is accountable when something goes wrong.
78% of boards discuss AI strategy at least quarterly, but only 23% have a formal AI governance framework in place.
Source: Deloitte, State of AI in the Enterprise, 6th Edition, 2024
Why Governance First
The argument for strategy first is intuitive. You need to know what you're doing with AI before you can govern it. But this logic is flawed for the same reason it would be flawed in finance or cybersecurity. You don't wait until after you've made investments to establish your risk framework. You establish the framework, then invest within it.
AI governance defines the boundaries: what data can be used, what decisions can be automated, what oversight is required, what happens when things go wrong. Without those boundaries, every AI initiative is a liability waiting to materialise.
I've seen this play out concretely. A company I advise deployed an AI-driven pricing tool without governance guardrails. It worked brilliantly for six months. Then it recommended pricing changes that violated trade agreements in two jurisdictions. The technology worked exactly as designed. The governance didn't exist to catch the regulatory implications.
That's a board-level failure, not a technology failure.
What Boards Need to Understand
I'm not suggesting every director needs a computer science degree. I've been in technology for thirty years and I don't fully understand the mathematics behind modern AI models. That's not the point.
Board-level AI governance requires understanding three things.
What Decisions Is AI Making?
This sounds basic. In practice, most boards can't answer it. AI gets deployed in customer service, in pricing, in risk assessment, in hiring, and each deployment makes decisions that have business and ethical implications.
The board needs a register of AI-driven decisions, ranked by impact and risk. Not a technical inventory of models and algorithms, but a business-language catalogue of what AI is doing and what happens if it gets it wrong.
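To make that concrete, here is a minimal sketch of what one entry in such a register might capture, written as code purely for precision. The field names and the 1-to-5 risk scale are my own illustration, not a standard:

```python
from dataclasses import dataclass

@dataclass
class AIDecisionRecord:
    """One entry in a board-level register of AI-driven decisions.

    Field names and the 1-5 risk scale are illustrative, not a standard.
    """
    decision: str          # what the AI decides, in business language
    business_owner: str    # the named executive accountable for outcomes
    impact_if_wrong: str   # consequence of a bad decision, in plain terms
    risk_rating: int       # 1 (low) to 5 (critical), set by the risk function
    human_oversight: str   # e.g. "approves every output" or "spot-checked monthly"

# Example entry: the kind of language a board can actually interrogate
pricing = AIDecisionRecord(
    decision="Sets dynamic prices for retail customers",
    business_owner="Chief Commercial Officer",
    impact_if_wrong="Regulatory breach, margin erosion, customer detriment",
    risk_rating=4,
    human_oversight="Pricing team reviews all changes above 5% before release",
)
```

The point of the structure is the vocabulary: every field is answerable by a business owner, not an engineer.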
What Data Is AI Using?
Data governance and AI governance are inseparable. If your AI is trained on biased data, it produces biased outputs. If it uses personal data without proper consent, you have a privacy violation. If it accesses data from one jurisdiction to make decisions in another, you may have a cross-border compliance issue.
The technical governance of AI (model validation, data lineage, output monitoring) needs to connect directly to the board's risk register. When those two worlds don't talk to each other, you get technically compliant systems making ethically questionable decisions. Governance frameworks have to bridge both.
Who Is Accountable?
This is where most governance frameworks fall down. AI creates a diffusion of accountability. The data team prepared the training data. The engineering team built the model. The product team deployed it. The business unit uses it. When something goes wrong, who owns it?
The board's job is to ensure clear accountability chains exist before AI is deployed, not after an incident forces the question.
42% of organisations have experienced at least one AI-related ethics incident, but fewer than 15% had governance frameworks to address it.
Source: MIT Sloan Management Review and Boston Consulting Group, AI and Business Strategy Report, 2024
A Practical Governance Framework
Based on my experience across multiple boards, here's what effective AI governance looks like in practice.
Quarterly AI risk review. Not a technology update. A risk-focused review that covers: new AI deployments since last quarter, incidents or near-misses, regulatory changes, and effectiveness of existing controls. This should be a standing board agenda item, not an annual exercise.
Clear escalation thresholds. Not every AI decision needs board oversight. But high-impact, high-risk applications (anything touching pricing, hiring, customer eligibility, or regulatory compliance) should have defined escalation paths. A sketch of what such thresholds might look like follows this framework.
Annual governance audit. An independent review of your AI governance framework, conducted by someone who understands both the technology and the regulatory environment. Internal audit teams rarely have both competencies yet.
Board education programme. Directors don't need to code. But they do need enough literacy to ask informed questions. This means structured education, not a one-off presentation but ongoing development. The technology moves fast; board understanding needs to keep pace.
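To illustrate the escalation thresholds mentioned above, here is a minimal sketch. The domains, the risk scale, and the three-tier paths are my own illustrative assumptions, not a prescribed rule set; each organisation would define its own:

```python
# Illustrative escalation logic. Domains, thresholds, and tiers are
# assumptions for the sake of example, not a prescribed rule set.
HIGH_RISK_DOMAINS = {"pricing", "hiring", "customer_eligibility", "regulatory_compliance"}

def escalation_path(domain: str, risk_rating: int) -> str:
    """Map an AI application to the level of oversight it requires."""
    if domain in HIGH_RISK_DOMAINS or risk_rating >= 4:
        return "board_risk_committee"   # standing quarterly review item
    if risk_rating == 3:
        return "executive_risk_owner"   # reviewed by the accountable executive
    return "business_unit"              # managed within normal line controls

print(escalation_path("pricing", 2))    # -> board_risk_committee
print(escalation_path("marketing", 2))  # -> business_unit
```

The design choice worth noting: domain membership alone can trigger escalation, so a "low-risk" rating can never quietly exempt a pricing or hiring system from board visibility.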
The Regulatory Reality
The EU AI Act is the most significant AI regulation globally, and it is now in force. For boards of companies that operate in European markets, this isn't hypothetical. The Act establishes risk categories for AI systems and imposes obligations that map directly to board governance responsibilities.
But even for companies outside the EU's direct jurisdiction, the direction is clear. Australia, New Zealand, Canada, and the UK are all developing AI governance frameworks. The regulatory trend is unmistakable, and boards that wait for local legislation before establishing governance are making the same mistake companies made with data privacy before GDPR.
Don't Wait for a Crisis
The boards that get AI governance right are doing it now, before an incident forces their hand. They're establishing frameworks, building literacy, and creating accountability structures that let their organisations pursue AI opportunity within defined risk boundaries.
The ones that don't will learn the same lesson every generation of board directors learns about governance: it's much cheaper to build it before you need it than to retrofit it after something goes wrong.
Strategy is important. But governance is the foundation it sits on. Boards that reverse the order are building on sand.

