Most enterprises know they need AI governance. Most don't know where to start. The common response is to form a committee, commission a policy document, and spend six months deliberating. By the time the policy is approved, the organisation has deployed three more AI tools without any governance at all.
What You Need to Know
- AI governance doesn't require a massive upfront investment. A functional framework can be stood up in 4-6 weeks with existing staff and no new tools.
- Start with a lightweight AI usage policy and an asset register. These two documents solve 80% of your immediate governance risk.
- Risk classification is the mechanism that makes governance proportionate. High-risk systems get heavy oversight, low-risk systems get light oversight. Without it, you either over-govern everything or under-govern everything.
- The biggest governance failure is not having one. The second biggest is building one so complex that nobody follows it.
- With the EU AI Act now in force, a governance framework built today can be aligned with regulatory requirements from the start.
Fewer than 10% of organisations have a comprehensive AI governance framework in place. (Source: MIT Sloan Management Review and Boston Consulting Group, AI and Business Strategy Survey, 2024)
Phase 1: The Foundation (Weeks 1-2)
1.1 Create Your AI Asset Register
You can't govern what you don't know about. Before writing any policies, inventory every AI system in your organisation.
This includes:
- Enterprise AI platforms (Azure OpenAI, AWS Bedrock, etc.)
- AI features embedded in existing software (Copilot in Office, AI in Salesforce, etc.)
- Custom-built AI capabilities
- Consumer AI tools employees are using (ChatGPT, Claude, Gemini, etc.)
- Third-party AI-powered services
For each system, record:
| Field | What to Capture |
|---|---|
| System name | What is it? |
| Provider | Who built/operates it? |
| Business owner | Who in your organisation is accountable? |
| Data accessed | What data does it process? Any PII? |
| Users | Who uses it and how many? |
| Use cases | What decisions or outputs does it support? |
| Risk tier | Low / Medium / High (see Phase 2) |
| Status | In use / Piloting / Planned / Decommissioned |
Don't try to capture everything perfectly on the first pass. A rough inventory completed in a week is infinitely more valuable than a perfect inventory that takes three months.
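A spreadsheet is enough to hold this register, but if you want to keep it as structured data (for example, in version control), the sketch below shows one way to capture the same fields and export them to CSV. The class names, fields, and example entry are illustrative, not a prescribed schema.

```python
# A minimal sketch of an AI asset register entry. Field names mirror the table
# above; class names and example values are illustrative, not a prescribed schema.
from dataclasses import dataclass, asdict
from enum import Enum
import csv


class RiskTier(Enum):
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"


@dataclass
class AIAsset:
    system_name: str      # What is it?
    provider: str         # Who built/operates it?
    business_owner: str   # Who in your organisation is accountable?
    data_accessed: str    # What data does it process? Any PII?
    users: str            # Who uses it and how many?
    use_cases: str        # What decisions or outputs does it support?
    risk_tier: RiskTier   # Low / Medium / High (see Phase 2)
    status: str           # In use / Piloting / Planned / Decommissioned


register = [
    AIAsset(
        system_name="Meeting summariser",
        provider="Vendor X (hypothetical)",
        business_owner="Head of Operations",
        data_accessed="Internal meeting transcripts; no client PII",
        users="All staff (~200)",
        use_cases="Drafting minutes and action lists",
        risk_tier=RiskTier.LOW,
        status="In use",
    )
]

# Export to CSV so the register can live in an ordinary spreadsheet.
with open("ai_asset_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(register[0]).keys()))
    writer.writeheader()
    for asset in register:
        row = asdict(asset)
        row["risk_tier"] = asset.risk_tier.value
        writer.writerow(row)
```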
1.2 Write Your AI Usage Policy
This is your single most impactful governance document. It tells every employee what they can and can't do with AI tools. Keep it short. Two pages maximum.
What it must cover:
- Approved tools. Which AI tools are sanctioned for use? Who approves new ones?
- Data boundaries. What data can and can't be shared with AI systems? Be specific: no client PII in consumer AI tools, no financial data in unapproved platforms, no confidential documents in public AI services.
- Use case boundaries. What decisions can AI inform? What decisions require human judgement regardless of AI output?
- Output handling. AI outputs must be reviewed before external use. AI-generated content must be identified as such when shared externally.
- Incident reporting. How to report AI errors, unexpected behaviour, or data concerns.
Template Structure
Write the policy as a one-page decision tree: "If you want to use AI for X, then Y applies." Employees shouldn't need to read a 20-page document to know whether they can paste a client email into ChatGPT. The answer should take 30 seconds to find.
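To make the decision-tree format concrete, the sketch below encodes a few hypothetical rules as code. The approved tools, prohibited data categories, and wording are placeholders; your policy supplies the real lists.

```python
# A sketch of the "If you want to use AI for X, then Y applies" logic.
# The approved tools, prohibited data categories, and answers are hypothetical;
# substitute the lists from your own policy.

APPROVED_TOOLS = {"Azure OpenAI (company tenant)", "Copilot (company licence)"}
PROHIBITED_DATA = {"client PII", "financial data", "confidential documents"}


def may_i_use_ai(tool: str, data_category: str, informs_decision_about_a_person: bool) -> str:
    if tool not in APPROVED_TOOLS:
        return "No - the tool is not approved. Request approval before using it."
    if data_category in PROHIBITED_DATA:
        return "No - this data category must not be shared with AI systems."
    if informs_decision_about_a_person:
        return "Yes, but a human must review the output before it informs the decision."
    return "Yes - follow the output-handling rules in the usage policy."


# Can I paste a client email into ChatGPT? The answer should take 30 seconds to find.
print(may_i_use_ai("ChatGPT (personal account)", "client PII", False))
```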
1.3 Assign Accountability
Every AI system in your asset register needs three named owners:
- Business owner - accountable for the system's outcomes and business value
- Technical owner - accountable for the system's performance, security, and maintenance
- Data owner - accountable for the data the system accesses and its quality
For small organisations, one person may hold multiple roles. That's fine. The point is explicit accountability, not headcount. What you're preventing is the situation where an AI system produces a harmful output and nobody knows who's responsible.
Phase 2: Risk Classification (Weeks 3-4)
2.1 Define Your Risk Tiers
Not all AI systems need the same level of governance. A risk classification framework ensures proportionate oversight. Three tiers work for most organisations:
Low Risk - Monitor
- AI tools used for internal productivity (drafting, summarising, brainstorming)
- No automated decision-making affecting individuals
- No access to sensitive or personal data
- Examples: AI writing assistants, code completion tools, meeting summarisers
Requirements: AI usage policy compliance. Basic logging. Annual review.
Medium Risk - Control
- AI systems that inform (but don't make) decisions affecting people or operations
- Access to business-sensitive data
- Customer-facing AI interactions
- Examples: Internal knowledge assistants, document classification, customer service chatbots
Requirements: Everything in Low Risk, plus: source attribution, user feedback mechanisms, quarterly performance review, data access controls, transparency disclosures.
High Risk - Govern
- AI systems that directly influence decisions affecting individuals' rights, safety, or financial outcomes
- Processing of personal or highly sensitive data
- Regulatory or compliance implications
- Examples: HR screening tools, credit risk scoring, clinical decision support, claims assessment
Requirements: Everything in Medium Risk, plus: formal risk assessment, bias audits, human-in-the-loop review, full audit trails, incident response procedures, regulatory compliance documentation.
2.2 Classify Your Asset Register
Go through your AI asset register and assign a risk tier to each system. When in doubt, classify upward. It's easier to relax governance later than to tighten it after an incident.
67% of AI-related incidents occurred in systems that had no formal risk classification. (Source: OECD, AI Incidents Monitor, 2024)
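To show how the tier definitions translate into a repeatable decision, the sketch below expresses them as four yes/no questions. The questions and their mapping to tiers are illustrative and should be adapted to your own definitions.

```python
# A sketch of the three-tier classification expressed as yes/no questions.
# The questions and their mapping to tiers are illustrative; adapt them to
# your own tier definitions and err upward when in doubt.

def classify_risk_tier(
    affects_rights_safety_or_finances: bool,
    processes_personal_or_sensitive_data: bool,
    informs_decisions_about_people: bool,
    customer_facing: bool,
) -> str:
    if affects_rights_safety_or_finances or processes_personal_or_sensitive_data:
        return "High"    # Govern: risk assessment, bias audits, full audit trails
    if informs_decisions_about_people or customer_facing:
        return "Medium"  # Control: attribution, feedback, quarterly review
    return "Low"         # Monitor: policy compliance, basic logging, annual review


# A customer service chatbot with no access to personal data:
print(classify_risk_tier(False, False, True, True))  # -> "Medium"
```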
2.3 Build Your Risk Assessment Template
For medium and high-risk systems, create a simple risk assessment that covers:
- What could go wrong? Identify potential failure modes (incorrect outputs, bias, data leaks, availability issues)
- Who is affected? Employees, customers, third parties?
- What's the impact? Financial, reputational, legal, safety?
- What controls exist? Human review, monitoring, fallbacks?
- What's the residual risk? After controls, is the remaining risk acceptable?
This doesn't need to be a 50-page document. A structured one-page assessment per system is enough to start.
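If it helps to standardise the one-pager, the sketch below captures the five questions as a structured record. The example system and every answer shown are invented for illustration.

```python
# A one-page risk assessment captured as a structured record. The keys mirror
# the five questions above; the system and answers are invented for illustration.
risk_assessment = {
    "system": "HR screening tool (example)",
    "what_could_go_wrong": "Biased ranking of candidates; incorrect parsing of CVs",
    "who_is_affected": "Job applicants; hiring managers",
    "impact": "Legal (discrimination claims), reputational",
    "existing_controls": "Human review of every shortlist; quarterly bias audit",
    "residual_risk": "Medium - acceptable with controls in place; re-assess quarterly",
}
```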
Phase 3: Operational Controls (Weeks 4-6)
3.1 Monitoring and Logging
Define what gets logged for each risk tier:
| Risk Tier | Logging Requirements |
|---|---|
| Low | Tool usage statistics (who uses what, how often) |
| Medium | Input/output logs, user feedback, performance metrics |
| High | Full audit trail: every input, output, retrieved source, decision, and user interaction |
For medium and high-risk systems, logs should be retained for a period appropriate to your industry's regulatory requirements. If unsure, 12 months is a reasonable default.
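As an indication of what a full audit trail entry might look like for a high-risk system, here is a minimal sketch of a single log record. The field names and values are assumptions, not a required format.

```python
# A single audit-trail record for a high-risk system, expressed as JSON.
# Field names, the system name, and the values are assumptions for illustration.
import json
from datetime import datetime, timezone

log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "system": "claims-assessment-assistant",  # hypothetical system name
    "user": "user-1234",
    "input": "Summarise claim C-789 and flag inconsistencies",
    "retrieved_sources": ["claims/C-789/assessor-report.pdf"],
    "output": "Two inconsistencies flagged: ...",
    "decision": "Referred to human assessor",
    "model_version": "v2.3",
}
print(json.dumps(log_entry, indent=2))
```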
3.2 Incident Response
Define what constitutes an AI incident and how to respond:
What's an incident?
- AI produces materially incorrect output that affects a decision
- AI system processes data it shouldn't have access to
- AI output causes harm to an individual or the organisation
- AI system behaves unexpectedly or inconsistently
Response process:
- Contain. Stop the system from causing further harm (disable if necessary)
- Assess. Determine scope, impact, and root cause
- Notify. Inform business owner, affected parties, and regulators if required
- Remediate. Fix the root cause, not just the symptom
- Learn. Document the incident and update governance controls
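A simple form or shared mailbox is enough to operationalise this. The sketch below shows one hypothetical way to capture the five steps in a single incident record.

```python
# A minimal incident record following the five steps above. The fields,
# system name, and example values are hypothetical; an email template or web
# form with the same fields works just as well.
incident = {
    "reported_by": "jane.doe@example.com",
    "system": "customer-service-chatbot",  # hypothetical
    "summary": "Chatbot quoted an incorrect refund policy to a customer",
    "contain": "Refund-related answers disabled pending fix",
    "assess": "Root cause: outdated policy document in the retrieval index",
    "notify": ["Business owner", "Affected customer"],
    "remediate": "Replaced the stale document; added a freshness check to indexing",
    "learn": "Added document expiry dates to the medium-risk review checklist",
    "status": "Closed",
}
```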
3.3 Review Cadence
Governance without review is just documentation. Set a review schedule:
| Activity | Frequency |
|---|---|
| AI asset register update | Monthly |
| Low-risk system review | Annually |
| Medium-risk system review | Quarterly |
| High-risk system review | Monthly |
| AI usage policy review | Every six months |
| Full governance framework review | Annually |
Phase 4: Team Structure
Who Owns AI Governance?
You don't need a dedicated AI governance team on day one. You need a clear structure:
AI Governance Lead. A named individual (often the CTO, CRO, or Head of Data) who is accountable for the governance framework. This is not a full-time role at most organisations. It's a responsibility added to an existing leadership position.
AI Governance Working Group. A cross-functional group that meets monthly to review the asset register, discuss incidents, and update policies. Include representatives from:
- Technology / Engineering
- Legal / Compliance
- Risk
- Business units that use AI
- HR (for workforce and employment AI)
Tip: Start with 4-6 people. More than 8 and the group becomes a committee. Committees discuss. Working groups decide.
When to Hire Dedicated Governance Staff
Consider dedicated AI governance roles when:
- You have more than 10 AI systems in production
- You have high-risk AI systems in regulated industries
- EU AI Act compliance is required
- Your AI governance working group is meeting more than twice a month
Common Mistakes
Building governance in isolation from delivery. If the governance team doesn't talk to the AI delivery team, policies will be ignored. Embed governance into the delivery process, not alongside it.
Making governance too complex too early. A two-page usage policy enforced consistently beats a 40-page framework that nobody reads. Start simple. Add complexity as your AI maturity grows.
Governing the tools but not the outputs. Approving which AI tools employees can use is step one. Governing how AI outputs are used in decisions is where the real risk lives.
No enforcement mechanism. A policy without consequences is a suggestion. Define what happens when the policy is violated. Not to be punitive, but to be clear.
Treating governance as a one-time project. AI capabilities evolve. Regulations change. New tools appear. Governance is an operating function, not a project with an end date.
The Quick-Start Checklist
For organisations that want to move fast, here's the minimum viable governance framework:
- AI asset register (spreadsheet is fine)
- Two-page AI usage policy
- Three-tier risk classification (low / medium / high)
- Named owners for every AI system
- Monthly governance working group meeting
- Incident reporting email or form
- Quarterly review of medium/high-risk systems
This can be done in four weeks with existing staff. It won't be complete, but it will be functional. And a functional framework you improve over time is worth infinitely more than a perfect framework you never finish building.
Frequently Asked Questions
- We're a small organisation with just a few AI tools. Do we really need governance?
- Yes, but proportionate governance. Your "framework" might be a one-page usage policy and a simple asset register. The point isn't complexity. It's intentionality. Knowing what AI tools you use, what data they access, and who's responsible is basic operational hygiene. It takes an afternoon to set up.
- How do we govern AI tools that employees bring in themselves (shadow AI)?
- The usage policy is your first line of defence. It tells employees what's approved and what's not. But shadow AI is inevitable. Complement the policy with technical controls (network monitoring for AI service endpoints, DLP policies) and make the approved tools good enough that employees don't feel the need to go elsewhere. Blocking without providing alternatives just pushes shadow AI deeper underground.
- Should we align our framework to ISO 42001?
- If you're in a regulated industry or serve enterprise clients who may require it, yes, use ISO 42001 as your structural guide from the start. If you're a smaller organisation or early in your AI journey, start with the practical framework in this guide and align to ISO 42001 later. The structures are compatible. You won't need to rebuild.
