We've both sat in board meetings where AI is on the agenda. The pattern is predictable. Someone presents a landscape slide. Someone asks about risk. Someone asks about cost. The CEO says something encouraging. And the board agrees to "monitor developments and report back next quarter." Nothing happens. The conversation felt productive. It wasn't.
What You Need to Know
- Most board conversations about AI are information-sharing exercises that don't produce decisions
- The productive board conversation starts with "what's the cost of inaction?" not "what's the cost of AI?"
- Boards need three things to make an AI decision: a bounded first step, clear success criteria, and a timeline
- The CEO's role is to frame AI as a strategic imperative, not a technology experiment
82% of boards have discussed AI, but only 23% have approved a specific AI initiative. (Source: Deloitte Board Governance Survey, 2024)
18 months: the average delay between a board's first AI discussion and its first funded initiative. (Source: McKinsey, 2024)
The Wrong Conversation
"Tell Us About AI"
Board members are smart, busy people who are behind on AI. Not because they're slow, but because AI moves at a pace that's hard to track from a governance seat. So the first board conversation is usually educational: "What is generative AI? What can it do? What are the risks?"
This is necessary. It's also where many boards get stuck. The educational conversation is comfortable. It doesn't require a decision. And because AI keeps evolving, there's always another development to discuss next quarter. The education loop becomes a way to defer action while appearing engaged.
"What Are the Risks?"
The risk conversation is the second trap. Boards are designed to manage risk, so they naturally gravitate toward it. "What about data privacy? What about bias? What about regulatory exposure? What about job displacement?"
These are legitimate questions. They become a trap when they're used as reasons to wait rather than conditions to plan for. Every enterprise technology adoption has risks. The question isn't whether there are risks. It's whether the risks are manageable and whether the risk of inaction is greater.
The real risk isn't getting AI wrong. It's spending two years discussing it while your competitors spend two years using it. The cost of inaction compounds faster than the cost of careful action.
— Isaac Rolfe, Managing Director
The Right Conversation
Start with the Cost of Inaction
Reframe the conversation from "what will AI cost us?" to "what will it cost us to wait?"
Every month that passes, competitors are building AI capability, customers are expecting AI-powered services, and the talent market is getting more competitive. The cost of inaction isn't zero. It's the opportunity cost of not building capability while others do.
This reframe shifts the board from a risk-management stance to a strategic-investment stance. It changes the question from "should we?" to "how fast?"
Present a Bounded First Step
Boards can't approve "an AI strategy." That's too abstract. They can approve a specific initiative with a defined scope, budget, timeline, and success criteria.
"We propose a 12-week discovery and pilot programme focused on document processing in our claims team. Budget: $150,000. Success criteria: 30% reduction in processing time and 80% team adoption. Decision point: at week 12, the board decides whether to scale."
This is decidable. It has a clear cost, a clear measure, and a clear exit. The board can say yes without committing to an undefined AI journey.
Define the CEO's Role
The CEO needs to own the AI narrative at the board level. Not the CTO. Not the Head of Innovation. The CEO. Because AI is a strategic initiative, not a technology experiment, and the board takes its cue from who presents it.
The CEO doesn't need to understand the technical details. They need to articulate three things:
- Why AI matters for this organisation specifically (not generically)
- What the first step looks like and what it will cost
- What competitive risk exists if the organisation doesn't act
When the CTO presents AI to the board, it reads as a technology initiative. When the CEO presents it, it reads as a strategic imperative. Same content. Different signal.
— Tim Hatherley-Greene, Chief Operating Officer
Build Confidence Through Cadence
The board doesn't need to make one big AI decision. They need to make a series of small, informed decisions. Approve the discovery. Review the findings. Approve the pilot. Review the results. Approve the scale.
Each decision builds on evidence from the previous one. The board's confidence grows incrementally, grounded in data from their own organisation, not from analyst reports about other companies.
What Good Boards Do
The boards that move effectively on AI share three characteristics:
They appoint a board-level AI sponsor. One director with enough curiosity and capability to stay current on AI developments and translate them for the wider board. This person isn't the AI expert. They're the board's bridge to the AI programme.
They set a decision timeline. "We will make a decision on our AI approach by [date]." Deadlines prevent the education loop. They force the management team to bring a proposal, not just an update.
They accept bounded risk. A $150,000 pilot with defined success criteria is not a bet-the-company risk. It's an investment in learning. Boards that treat every AI expenditure as high-risk will never build capability. The right framing: "What's the maximum we could lose, and can we afford that learning?"
The board conversation about AI isn't about technology. It's about competitive positioning, organisational capability, and the cost of waiting. Frame it that way, present a bounded first step, and put the CEO in the chair. The board will decide.