In 2023, most enterprises didn't have an AI policy. By early 2025, most do. Progress? On paper. In practice, the gap between having a policy and operating by that policy is where real risk accumulates, and most organisations haven't closed it.
What You Need to Know
- Having an AI policy is necessary but insufficient. The policy is a document. Governance is the system that makes the policy operational: processes, tooling, accountability, and enforcement.
- The most common governance failure isn't "no policy." It's a policy that exists but isn't embedded in how teams actually work. Staff don't know it exists, workflows don't enforce it, and nobody is accountable for compliance.
- Operationalising governance requires three things: awareness (people know the policy), integration (workflows enforce the policy), and accountability (someone monitors compliance).
- The governance gap widens as AI adoption scales. One team using ChatGPT is manageable. Fifteen teams using different AI tools, across different use cases and different levels of data sensitivity, is not, unless governance is operational.
81% of organisations had an AI usage policy by late 2024, up from under 30% in 2023 (KPMG, AI Adoption in the Enterprise 2024).
14% of those organisations reported that the policy was consistently followed across all teams (KPMG, AI Adoption in the Enterprise 2024).
Where the Gap Lives
Gap 1: Policy Awareness
The policy was written. It was emailed to all staff. It sits on the intranet. And 70% of employees have never read it.
This isn't an awareness campaign problem. It's a design problem. AI policies written as 15-page legal documents get filed and forgotten. Policies need to be actionable at the point of decision: when a team member is about to upload client data to an AI tool, they need to know, in that moment, whether it's permitted and under what conditions.
What operational looks like: Short, role-specific guidance embedded in the tools and workflows people already use. A prompt in the AI tool that says "This workspace processes client data. Do not upload files classified as Confidential or above." Not a policy document they were emailed six months ago.
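As a rough illustration of guidance embedded at the point of decision, the sketch below keys a short reminder to a workspace's data classification. It is a minimal Python sketch under assumed names; the classification labels, messages, and helper function are hypothetical, not a reference to any particular product.

```python
# Illustrative only: a point-of-decision policy reminder, keyed to a
# hypothetical workspace data classification (labels are assumptions).

WORKSPACE_GUIDANCE = {
    "client_data": (
        "This workspace processes client data. Do not upload files "
        "classified as Confidential or above."
    ),
    "internal_only": (
        "Internal use only. Approved for drafts and research, not for "
        "client-facing output without review."
    ),
}

def guidance_banner(workspace_classification: str) -> str:
    """Return the short reminder shown when the AI workspace is opened."""
    return WORKSPACE_GUIDANCE.get(
        workspace_classification,
        "No classification set. Check the AI usage policy before uploading data.",
    )

print(guidance_banner("client_data"))
```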
Gap 2: Workflow Integration
The policy says "AI outputs must be reviewed by a qualified professional before client delivery." In practice, there's no workflow mechanism that enforces this. Staff can generate AI output and send it directly to clients. The policy creates an obligation; the workflow doesn't enforce it.
This is the most dangerous gap. It relies entirely on individual compliance, and individual compliance degrades under time pressure, which is precisely when AI is most likely to be used.
What operational looks like: Workflow controls that make the policy the path of least resistance. AI-generated output is flagged in the document management system. Client-facing documents require a review stamp before they can be sent. The system enforces the policy; individuals don't have to remember it.
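To make "the system enforces the policy" concrete, here is a minimal sketch of a send gate, assuming a hypothetical document record that carries an AI-generated flag and a reviewer sign-off field. It is a sketch of the pattern, not a real document management system API.

```python
# Minimal sketch of a review gate. The document fields and sign-off flow are
# assumptions for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ClientDocument:
    title: str
    ai_generated: bool
    reviewed_by: Optional[str] = None  # set when a qualified reviewer signs off

def can_send_to_client(doc: ClientDocument) -> bool:
    """AI-assisted output needs a review stamp before it can leave the firm."""
    return not (doc.ai_generated and doc.reviewed_by is None)

draft = ClientDocument(title="Q3 advice letter", ai_generated=True)
assert not can_send_to_client(draft)   # blocked until reviewed
draft.reviewed_by = "j.smith"
assert can_send_to_client(draft)       # review stamp unlocks sending
```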
Gap 3: Data Classification for AI
The policy says "do not use AI with sensitive data." But the organisation hasn't clearly classified which data is sensitive in the context of AI use. Customer names? Probably fine. Customer financial records? Clearly sensitive. Customer email addresses? Depends on the context and the AI tool's data processing terms.
Without clear, AI-specific data classification, every staff member makes their own judgement call. Some are conservative and avoid AI entirely. Others are liberal and upload client files to consumer tools. Both outcomes are bad. One wastes AI's potential, the other creates risk.
What operational looks like: Data classification extended specifically for AI use cases. Clear categories: data that can be used with any AI tool, data that can only be used with enterprise-approved tools, and data that cannot be processed by AI at all. Classification visible at the point of data access, not buried in a policy document.
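One minimal way to express such a scheme, sketched with assumed category names rather than any established taxonomy:

```python
# Hedged sketch: three AI-specific data categories mapped to permitted tools.
# Category names and example data types are assumptions.

from enum import Enum

class AIDataCategory(Enum):
    OPEN = "any AI tool"                                   # e.g. published marketing copy
    ENTERPRISE_ONLY = "approved enterprise AI tools only"  # e.g. internal documents
    NO_AI = "must not be processed by AI"                  # e.g. client financial records

def permitted_tools(category: AIDataCategory) -> str:
    """The rule shown at the point of data access, not buried in a policy."""
    return category.value

print(f"Client financial records: {permitted_tools(AIDataCategory.NO_AI)}")
```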
Gap 4: Accountability and Monitoring
The policy exists. Nobody monitors compliance. Nobody reports on it. Nobody is accountable for whether teams follow it. The policy becomes a liability shield ("we told them not to") rather than an operational control.
What operational looks like: Quarterly governance reviews. Usage analytics from enterprise AI tools (who's using what, with what data). Spot-check audits of AI-assisted outputs. A named governance owner who reports to leadership on compliance status. Informative, not punitive: the goal is to understand whether governance is working, not to catch people out.
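To make the monitoring piece concrete, here is an illustrative sketch that summarises AI usage logs for a quarterly review. The log structure and field names are assumptions; real enterprise tools expose their own analytics.

```python
# Illustrative only: summarising assumed AI usage logs for a governance review.

from collections import Counter

usage_log = [
    {"team": "tax",   "tool": "enterprise_llm",   "data_class": "ENTERPRISE_ONLY"},
    {"team": "audit", "tool": "consumer_chatbot", "data_class": "NO_AI"},
]

def compliance_summary(log):
    """Count usage per tool and flag events involving data barred from AI use."""
    usage_by_tool = Counter(entry["tool"] for entry in log)
    potential_breaches = [e for e in log if e["data_class"] == "NO_AI"]
    return {"usage_by_tool": dict(usage_by_tool),
            "potential_breaches": potential_breaches}

print(compliance_summary(usage_log))
```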
The Scaling Problem
The governance gap is manageable at small scale. When 10 people use one AI tool for low-sensitivity tasks, informal governance works. The gap becomes dangerous as AI scales:
| Scale | Governance need |
|---|---|
| 1 team, 1 tool | AI usage policy is sufficient |
| 3-5 teams, multiple tools | Need workflow integration and data classification |
| 10+ teams, AI in core workflows | Need full operational governance: monitoring, accountability, enforcement |
| Organisation-wide AI adoption | Governance must be automated and embedded in every AI touchpoint |
Most enterprises are in the middle rows: enough AI adoption that informal governance is insufficient, not enough to justify a dedicated governance function. This is the danger zone. The policy exists, the adoption is growing, and the gap between the two is widening.
Closing the Gap
Step 1: Audit the Current State
Before building operational governance, understand the gap. For each element of your AI policy, assess:
- Awareness: Do the people this applies to know about it? How do you know?
- Integration: Is there a workflow mechanism that enforces it? Or does it rely on individual compliance?
- Accountability: Who monitors compliance? How often? What happens when there's a breach?
Score each policy element on these three dimensions. The elements that score low on all three are your highest risk.
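A lightweight way to run this scoring, sketched below with placeholder policy elements and an assumed 0-2 scale (0 = absent, 2 = operational):

```python
# Rough sketch of the gap audit. Policy elements and scores are placeholders.

policy_scores = {
    "No client data in consumer AI tools": {"awareness": 1, "integration": 0, "accountability": 0},
    "Human review before client delivery": {"awareness": 2, "integration": 1, "accountability": 1},
}

def highest_risk(scores, threshold=1):
    """Elements scoring at or below the threshold on all three dimensions."""
    return [element for element, dims in scores.items()
            if all(score <= threshold for score in dims.values())]

print(highest_risk(policy_scores))
# -> ['No client data in consumer AI tools']
```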
Step 2: Embed Governance in Workflows
Take your highest-risk policy elements and build workflow controls. This doesn't mean bureaucracy. It means making the right thing easy and the wrong thing hard.
Examples:
- Enterprise AI tools configured to reject uploads of classified data types (a sketch of this control follows the list)
- AI-generated content watermarked or flagged in document management systems
- Mandatory review checkpoints for AI-assisted client deliverables
- Automated logging of AI tool usage for audit purposes
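As a hedged sketch of the first control above, assuming each file already carries a classification label consistent with the AI-specific scheme described earlier:

```python
# Illustrative upload guard. Classification labels and the upload hook are
# assumptions; a real enterprise AI tool would enforce this server-side.

BLOCKED_CLASSIFICATIONS = {"Confidential", "Restricted"}

def accept_upload(filename: str, classification: str) -> bool:
    """Reject uploads whose classification is not permitted for this AI tool."""
    if classification in BLOCKED_CLASSIFICATIONS:
        print(f"Upload blocked: {filename} is classified {classification}.")
        return False
    return True

accept_upload("client_ledger.xlsx", "Confidential")  # blocked
accept_upload("draft_blog_post.docx", "Public")      # allowed
```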
Step 3: Assign Accountability
Every governance domain needs a named owner. Not a committee, a person. They don't do all the work; they're accountable for the outcome. They report to leadership quarterly on compliance status, gaps identified, and remediation actions.
For most organisations, this is a part-time role initially, someone in risk, compliance, or IT governance who adds AI governance to their portfolio. As AI adoption scales, it may become a dedicated function.
Step 4: Make Governance Visible
Governance that nobody sees is governance that nobody follows. Publish a quarterly AI governance report: internal, brief, and factual. It should cover which AI tools are in use, how they're being used, where compliance is strong, where gaps exist, and what actions are planned.
Visibility creates accountability. When leadership sees the governance report, teams pay attention. When teams know their usage is monitored (not surveilled, monitored), behaviour aligns with policy.
The greatest risk isn't in organisations without AI policies; it's in organisations that believe having a policy means having governance. The policy is the intention; governance is the execution.
Dr Tania Wolfgramm
Chief Research Officer
- We wrote our AI policy 12 months ago. Is it still relevant?
- Almost certainly not in its current form. AI capabilities, tools, and risks have evolved significantly. Review and update your policy at minimum every 6 months. Pay particular attention to new AI tools being used (especially by teams adopting tools independently), new use cases that weren't envisioned when the policy was written, and changes in the regulatory landscape.
- How do we enforce governance without slowing down AI adoption?
- By embedding governance in the workflow, not adding it as a separate step. The goal is making the governed path the easiest path. Pre-approved tools with pre-configured data boundaries. Templates that include required review steps. Automated logging that doesn't require manual effort. Governance should be invisible when you're doing the right thing and visible when you're about to do the wrong thing.
- Should we hire a dedicated AI governance role?
- It depends on your scale. Under 5 AI use cases in production: extend an existing governance or risk role. Over 5 use cases or in regulated industries: a dedicated AI governance role is justified. The role is part-technical, part-policy: someone who understands both the AI technology and the governance framework. Rare profile, but essential at scale.
