
The Governance Gap in AI Companies

AI companies are scaling fast, and most have no governance framework for the technology that defines them. From a board perspective, the risk exposure is staggering.
12 June 2025 · 7 min read
Mike Ridgway
Technology Growth Advisory
I've been a chairman and director through multiple waves of technology disruption. None has moved as fast as AI, and none has opened a governance vacuum this wide. The companies building and deploying AI are, in many cases, operating with no meaningful governance framework for the very technology that defines their business.

What You Need to Know

  • Most AI companies have no board-level governance framework for AI risk, ethics, or safety. The board can tell you about financial risk. Ask about model risk and you'll get blank stares
  • The speed-versus-governance tension is real but overstated. Good governance doesn't slow innovation. It prevents the catastrophic failures that slow innovation permanently
  • Directors have a fiduciary duty to understand AI risk in companies where AI is core to the business model. "The technology team handles it" is not a governance position
  • Regulatory frameworks are coming. Companies that build governance proactively will have competitive advantage over those forced to retrofit compliance
78%
of AI company boards have no formal AI governance framework or designated AI risk oversight process
Source: World Economic Forum, AI Governance Alliance Survey, 2025

What I See from the Boardroom

I sit on boards of technology companies that are integrating AI into their products and operations. The pattern is remarkably consistent.
The technology team is excited. They're shipping AI features rapidly. The commercial team is excited. AI is opening new market conversations. The board is excited because revenue associated with AI capability is growing.
Nobody is asking the governance questions.
What data are we training on, and do we have the rights to use it? What happens when the model produces an output that harms a customer? Who is accountable for AI-driven decisions that have legal or financial consequences? What's our liability exposure if our AI system discriminates, even unintentionally?
These aren't hypothetical questions. They're the questions that regulators, customers, and courts will be asking. And in my experience, most AI companies cannot answer them today.

The Speed Fallacy

The most common pushback I hear from AI company leaders is that governance will slow them down: in a market moving this fast, the argument goes, any friction is a competitive disadvantage.
I've heard this argument before. I heard it during the dot-com boom. I heard it from financial services companies before the global financial crisis. The argument is always the same: we're moving too fast for governance. The outcome is also always the same: the companies that skipped governance paid far more to clean up the consequences than governance would have cost.
"Move fast and figure out governance later" is not a strategy. It's a liability that compounds with every customer, every deployment, and every model update.
Mike Ridgway
Technology Growth Advisory
Good governance for AI companies doesn't mean slowing down development. It means having clarity about:
What you will and won't do. Boundaries aren't constraints on innovation. They're the parameters within which innovation is safe. A clear AI ethics policy takes weeks to develop and saves months of crisis management.
Who is accountable. When an AI system produces an adverse outcome, there must be a clear chain of accountability from the model to the boardroom. Not to assign blame, but to ensure the right people are involved in the response.
How you monitor. AI systems degrade and drift. Governance requires ongoing monitoring, not just pre-deployment review. The model you approved six months ago may be behaving differently today.
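
Directors don't need to run monitoring themselves, but it helps to see how small the core check can be. As a minimal sketch, assuming a model emits numeric scores and that outputs from the sign-off validation run were retained as a baseline, a standard two-sample test can flag when live behaviour has drifted. The function name, threshold, and framing below are illustrative assumptions, not an established standard.

    # Minimal drift check: compare recent model outputs against the
    # distribution observed at board-approved sign-off. Threshold and
    # naming are illustrative, not a standard.
    from scipy.stats import ks_2samp

    DRIFT_P_VALUE = 0.01  # illustrative alert threshold

    def check_output_drift(reference_scores, live_scores):
        """Flag when live outputs no longer match the approved baseline.

        reference_scores: model outputs captured at sign-off
        live_scores: outputs from the most recent monitoring window
        """
        statistic, p_value = ks_2samp(reference_scores, live_scores)
        return {
            "ks_statistic": statistic,
            "p_value": p_value,
            "drift_detected": p_value < DRIFT_P_VALUE,
        }

In practice a check like this runs on a schedule against every deployed system, and a drift alert feeds the same incident process discussed below.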

The Board's Responsibility

If you're a director of a company where AI is material to the business, you have a fiduciary obligation to understand AI risk. Not at the technical level: you don't need to understand transformer architectures or training methodologies. At the governance level.

Questions Every Board Should Be Asking

What AI systems are we deploying, and what decisions do they influence? Most boards cannot answer this question comprehensively. AI is often deployed incrementally, without a centralised view of where it's being used and what it's affecting. A simple, centrally maintained system register, sketched after these questions, is the starting point.
What's our data governance framework? Training data has legal, ethical, and commercial implications. Boards should understand what data their AI systems use, where it comes from, and whether the company has appropriate rights and protections.
What testing and validation processes exist? Before an AI system affects customers, what review does it go through? Who signs off? What standards are applied? In most AI companies, this is entirely at the discretion of the engineering team, with no board visibility.
What's our incident response plan? When, not if, an AI system produces a harmful output, what's the plan? Who gets notified? What's the communication protocol? How is the system updated?
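
None of these questions demands technical depth to answer, and the first yields to something as simple as a shared register. The sketch below is hypothetical; every field name is an assumption about what a board pack might want to see, not an established schema.

    # Hypothetical AI system register entry: the fields a board-level
    # inventory might track. All names here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        name: str                        # e.g. "support-ticket-triage"
        owner: str                       # accountable executive, not just a team
        decisions_influenced: list[str]  # pricing, routing, credit, triage...
        customer_facing: bool            # does output reach customers directly?
        last_validated: str              # date of most recent sign-off review
        risk_rating: str                 # "low" / "medium" / "high"

    register = [
        AISystemRecord(
            name="support-ticket-triage",
            owner="Chief Operating Officer",
            decisions_influenced=["ticket routing", "response priority"],
            customer_facing=True,
            last_validated="2025-03-01",
            risk_rating="medium",
        ),
    ]

    # A board pack can then surface the systems that warrant attention:
    flagged = [s.name for s in register
               if s.customer_facing and s.risk_rating == "high"]

The point is not the code. It is that the inventory question has a cheap, concrete answer once someone is made responsible for maintaining it.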
$2.1B
in regulatory fines and legal settlements related to AI systems globally in the twelve months to March 2025, a 340% increase year-on-year
Source: Stanford HAI, AI Index Report, 2025

The Regulatory Trajectory

The EU AI Act is in force. The UK has published its AI regulatory framework. Australia is developing sector-specific AI regulation. New Zealand, while typically slower on technology regulation, is watching closely.
Companies that build governance frameworks now will find regulatory compliance straightforward. Companies that wait will face costly retrofitting.
More importantly, enterprise customers are increasingly requiring AI governance documentation as part of procurement. I've seen deals stall because the AI vendor couldn't provide adequate documentation about model governance, data provenance, and risk management. This isn't regulatory pressure. It's market pressure.

What Good AI Governance Looks Like

It doesn't look like a 200-page policy document. It looks like:
A board-level AI committee or designated director with AI oversight responsibility. Just as boards have audit committees and remuneration committees, companies with material AI exposure need governance structures that give AI risk the attention it warrants.
An AI risk register maintained alongside the company's broader risk register, updated quarterly and reviewed by the board. A sketch of what a single entry might capture follows this list.
Clear policies on data, ethics, and deployment that are short enough to be read, specific enough to be actionable, and reviewed frequently enough to remain current.
Regular AI governance reporting to the board, covering system performance, incidents, regulatory developments, and risk exposure.
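
To make the risk register tangible: the entry below is a hypothetical sketch of a single AI risk line item, reusing the likelihood-times-impact scoring that conventional risk registers already apply. The fields and scales are illustrative assumptions.

    # Hypothetical AI risk register entry, mirroring a conventional
    # likelihood x impact register. Fields and scales are illustrative.
    ai_risk_entry = {
        "risk_id": "AI-007",
        "description": "Recommendation model drifts after retraining and "
                       "produces discriminatory outcomes for a protected group",
        "likelihood": 3,   # 1 (rare) to 5 (almost certain)
        "impact": 5,       # 1 (minor) to 5 (severe)
        "mitigations": [
            "pre-deployment fairness testing",
            "ongoing drift monitoring with alerting",
        ],
        "owner": "Chief Technology Officer",
        "board_review": "quarterly",
    }

    # Standard register arithmetic: the inherent score drives board attention.
    risk_score = ai_risk_entry["likelihood"] * ai_risk_entry["impact"]  # 15 of 25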
The companies that get this right won't just avoid regulatory and reputational risk. They'll build trust with enterprise customers who increasingly demand governance maturity from their AI vendors. In a market where every company claims to do AI, governance will become a genuine differentiator.