Most enterprises know they need "more AI." Few can articulate exactly which AI capabilities they have, which they need, and how to close the gap in a sequenced, prioritised way. AI capability mapping gives you that clarity: a structured view of where you are, where you need to be, and what to build next.
What You Need to Know
- AI capability mapping is the process of cataloguing your current AI capabilities, defining your target state, and identifying the gaps that need to be closed, with clear priorities and sequencing.
- This isn't an AI maturity assessment (which measures organisational readiness). It's a capability-level analysis that identifies specific AI capabilities across business functions and scores them on defined criteria.
- The output is a prioritised roadmap: which capabilities to build first, which to defer, and which to skip entirely. It directly feeds your AI discovery sprint and investment decisions.
- Run this exercise annually or after any significant strategic shift. It takes 2-3 days with the right stakeholders.
54%
of enterprises report having no structured method for identifying and prioritising AI use cases
Source: McKinsey, The State of AI in Early 2024
The Framework
AI capability mapping has four phases: Inventory, Assessment, Gap Analysis, and Prioritisation. Each builds on the previous.
Phase 1: Inventory - What Do We Have?
Catalogue every AI capability currently deployed or in development across the organisation. Most enterprises undercount. Shadow AI (teams using tools without central awareness) is common.
For each capability, document:
| Field | Description | Example |
|---|---|---|
| Capability name | What it does, in plain language | "Contract clause extraction" |
| Business function | Which department or function it serves | Legal, Procurement |
| Status | Deployed / In development / Pilot / Planned | Deployed |
| Technology | Models, tools, and infrastructure used | GPT-4o via Azure OpenAI, custom pipeline |
| Data sources | Which systems provide input data | Document management system, CRM |
| Users | Who uses it and how many | Legal team (12 users), weekly |
| Owner | Who is accountable for this capability | Head of Legal Operations |
| Integration level | Standalone / Partially integrated / Fully integrated | Partially integrated |
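The inventory fields above map naturally onto a simple record type, which makes the register easy to query and keep consistent across workshops. A minimal sketch in Python — the class, field names, and example values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    DEPLOYED = "Deployed"
    IN_DEVELOPMENT = "In development"
    PILOT = "Pilot"
    PLANNED = "Planned"


@dataclass
class Capability:
    """One row of the capability inventory."""
    name: str                  # what it does, in plain language
    business_function: str     # department or function served
    status: Status
    technology: str            # models, tools, and infrastructure used
    data_sources: list[str]    # systems providing input data
    users: str                 # who uses it and how often
    owner: str                 # who is accountable
    integration_level: str     # Standalone / Partially integrated / Fully integrated


# Example: the contract clause extraction capability from the table above
contract_extraction = Capability(
    name="Contract clause extraction",
    business_function="Legal",
    status=Status.DEPLOYED,
    technology="GPT-4o via Azure OpenAI, custom pipeline",
    data_sources=["Document management system", "CRM"],
    users="Legal team (12 users), weekly",
    owner="Head of Legal Operations",
    integration_level="Partially integrated",
)
```

Keeping the register as structured records rather than a slide deck means the Phase 2 scores and Phase 4 priorities can be attached to the same objects later.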
Where to look:
- IT procurement records (AI tool subscriptions)
- Cloud provider usage logs (API calls to AI services)
- Department heads (what tools are their teams using?)
- Innovation or digital teams (pilots and experiments)
- Shadow IT audit (consumer AI tools being used with corporate data)
Don't Skip Shadow AI
In our experience, 30-50% of AI usage in enterprises happens outside IT's visibility: teams signing up for AI tools with corporate credit cards, individuals using personal ChatGPT accounts with work data. Your inventory must capture these. They represent both capability (teams are solving real problems) and risk (ungoverned data exposure).
Phase 2: Assessment - How Good Are They?
Score each capability on five dimensions. Use a 1-5 scale.
Dimension 1: Business Impact
How much value does this capability deliver (or could it deliver at target state)?
| Score | Level | Criteria |
|---|---|---|
| 1 | Minimal | Saves minor time; no measurable business outcome |
| 2 | Useful | Saves meaningful time for a small group; indirect business benefit |
| 3 | Significant | Measurable impact on a business KPI (cost, speed, quality, revenue) |
| 4 | High | Major impact on a core business process; clear ROI |
| 5 | Transformative | Enables a fundamentally new capability or business model |
Dimension 2: Technical Maturity
How production-ready is the capability?
| Score | Level | Criteria |
|---|---|---|
| 1 | Experimental | Proof of concept only; no production infrastructure |
| 2 | Prototype | Working but fragile; manual processes; limited testing |
| 3 | Operational | Running in production with basic monitoring; some manual intervention |
| 4 | Robust | Automated deployment, monitoring, and alerting; handles edge cases |
| 5 | Optimised | Continuously improved; A/B tested; fully automated lifecycle |
Dimension 3: Data Foundation
How strong is the data layer supporting this capability?
| Score | Level | Criteria |
|---|---|---|
| 1 | Manual | Data is manually prepared for each use; no pipeline |
| 2 | Basic | Some automation; data quality issues; single source |
| 3 | Structured | Automated pipeline; multiple sources; basic quality checks |
| 4 | Managed | Reliable pipeline; data quality monitoring; schema versioning |
| 5 | Excellent | Real-time pipeline; comprehensive quality; shared infrastructure |
Dimension 4: Governance Compliance
How well does this capability align with your governance framework?
| Score | Level | Criteria |
|---|---|---|
| 1 | Ungoverned | No governance applied; unknown data handling |
| 2 | Basic | Usage policy acknowledged; no enforcement mechanism |
| 3 | Governed | Risk classified; access controlled; basic audit trail |
| 4 | Compliant | Full governance alignment; regular reviews; monitoring |
| 5 | Embedded | Automated compliance; governance integrated into deployment pipeline |
Dimension 5: Scalability
How well does this capability scale across the organisation?
| Score | Level | Criteria |
|---|---|---|
| 1 | Single use | Works for one team, one use case; not reusable |
| 2 | Adaptable | Could serve other teams with significant modification |
| 3 | Reusable | Core capability is reusable; configuration needed per team |
| 4 | Platform | Built on shared infrastructure; easily extended |
| 5 | Self-service | Other teams can deploy and configure independently |
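The five dimensions make up a scored register that should be validated as it is filled in — a missing dimension or an out-of-range score usually signals a rushed assessment. A sketch, assuming unweighted 1-5 scores per dimension (some organisations weight dimensions by strategic importance; the dimension keys are illustrative):

```python
DIMENSIONS = (
    "business_impact",
    "technical_maturity",
    "data_foundation",
    "governance_compliance",
    "scalability",
)


def validate_scores(scores: dict[str, int]) -> dict[str, int]:
    """Check that every dimension is present and scored 1-5."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"Missing dimensions: {sorted(missing)}")
    for dim, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{dim} must be 1-5, got {value}")
    return scores


# Example: a document-processing capability with a weak data layer
doc_processing = validate_scores({
    "business_impact": 4,
    "technical_maturity": 3,
    "data_foundation": 2,
    "governance_compliance": 3,
    "scalability": 2,
})
```

A profile like this one — high Business Impact, low Data Foundation — is exactly the shape the gap analysis in Phase 3 is designed to surface.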
Phase 3: Gap Analysis - What's Missing?
For each business function, compare current capabilities against target state. The gap analysis identifies three types of gaps:
Missing capabilities. Functions where AI could deliver significant value but no capability exists. Identify these by interviewing business function leaders: "What are your highest-volume, most repetitive, most error-prone processes?"
Underperforming capabilities. Capabilities that exist but score below target on one or more dimensions. A document processing capability that scores 4 on Business Impact but 2 on Data Foundation has a clear gap to close.
Ungoverned capabilities. Capabilities scoring 1-2 on Governance Compliance. These are immediate risk items regardless of their other scores.
Create a gap matrix:
| Business function | Current capabilities | Target capabilities | Gap type |
|---|---|---|---|
| Legal | Contract review (basic) | Contract intelligence (advanced) | Underperforming |
| Finance | Invoice processing (pilot) | Full AP automation | Underperforming |
| Customer service | Chatbot (deployed) | Multi-channel intelligence | Underperforming |
| Operations | None | Demand forecasting | Missing |
| HR | CV screening (shadow AI) | Talent intelligence | Ungoverned + Missing |
Phase 4: Prioritisation - What Do We Build Next?
Not all gaps are equal. Prioritise using three criteria:
Criterion 1: Value-to-Effort Ratio
Estimate the business value (from Impact scores) relative to the effort required to close the gap. High-value, low-effort gaps go first.
Criterion 2: Foundation Contribution
Does closing this gap create shared infrastructure that accelerates future capabilities? Capabilities that build reusable data pipelines, integration patterns, or governance frameworks score higher, even if their standalone ROI is modest.
Criterion 3: Risk Reduction
Does this gap represent active risk? Ungoverned capabilities and shadow AI should be addressed early regardless of their value-to-effort ratio, because the risk of data exposure or compliance failure is immediate.
Prioritisation matrix:
| Priority | Criteria | Action |
|---|---|---|
| P1 - Now | High value + foundation contribution, OR active governance risk | Build this quarter |
| P2 - Next | High value, moderate effort, limited foundation contribution | Build next quarter |
| P3 - Later | Moderate value, or high effort relative to value | Backlog - reassess quarterly |
| P4 - Skip | Low value, high effort, no foundation contribution | Don't build; revisit annually |
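The prioritisation matrix can be encoded as a small decision function so every gap is ranked by the same rule. A sketch, assuming value and effort are rated 1-5 and foundation contribution and governance risk are yes/no flags — the thresholds are illustrative and should match your own calibration:

```python
def prioritise(value: int, effort: int,
               foundation: bool, governance_risk: bool) -> str:
    """Map one gap to a P1-P4 priority per the matrix above."""
    if governance_risk:
        return "P1"   # active risk: build this quarter regardless of ratio
    if value >= 4 and foundation:
        return "P1"   # high value + foundation contribution
    if value >= 4 and effort <= 3:
        return "P2"   # high value, moderate effort, limited foundation value
    if value <= 2 and effort >= 4 and not foundation:
        return "P4"   # low value, high effort: don't build
    return "P3"       # backlog - reassess quarterly


# Shadow-AI CV screening: modest value, but an active governance risk
assert prioritise(value=3, effort=2, foundation=False, governance_risk=True) == "P1"

# Low-value, high-effort gap with no platform contribution
assert prioritise(value=2, effort=5, foundation=False, governance_risk=False) == "P4"
```

Running every gap through the same function keeps the ranking defensible when stakeholders lobby for pet projects.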
3-5
is the optimal number of AI capabilities to actively develop simultaneously for mid-to-large enterprises
Source: RIVER Group, enterprise engagement data 2024
Running the Exercise
Who Needs to Be in the Room
- Business function leaders (3-5). They know the processes, pain points, and potential
- IT / Digital lead. They know the current technology landscape and shadow AI
- Data / Analytics lead. They know the data landscape and quality issues
- Governance / Risk representative. They assess compliance implications
- AI strategy owner. They facilitate and own the output
Timeline
| Day | Activity | Output |
|---|---|---|
| Day 1 (half day) | Inventory workshop: catalogue all current AI capabilities including shadow AI | Capability inventory |
| Day 1 (half day) | Assessment: score each capability on five dimensions | Scored capability register |
| Day 2 (half day) | Gap analysis: identify missing, underperforming, and ungoverned capabilities | Gap matrix |
| Day 2 (half day) | Prioritisation: rank gaps and define build sequence | Prioritised roadmap |
| Day 3 (optional) | Business case development for P1 capabilities | Investment proposals |
Output Document
The final deliverable is a one-page capability map showing:
- Current state: All AI capabilities, scored and categorised by business function
- Target state: Desired capabilities, with target scores per dimension
- Gap summary: Missing, underperforming, and ungoverned capabilities
- Prioritised roadmap: P1/P2/P3/P4 with estimated effort and sequencing
- Foundation dependencies: Which P1 capabilities create shared infrastructure for P2+
This document becomes the input for quarterly AI discovery sprints, investment decisions, and platform team planning.
Common Mistakes
Mapping Technology Instead of Capability
Don't catalogue AI tools. Catalogue what the AI does for the business. "We use Azure OpenAI" is a technology statement. "We extract structured data from 500 contracts per month with 94% accuracy" is a capability statement. Map the capability, note the technology.
Ignoring Shadow AI
If you only map capabilities that IT knows about, you'll miss 30-50% of actual AI usage. Shadow AI is valuable signal. It tells you where teams have real problems they're solving with AI, outside official channels. Capture it, govern it, and decide whether to formalise or retire it.
Setting Unrealistic Target States
Not every business function needs a score of 5 on every dimension. Set target states based on the function's strategic importance and the realistic level of investment. A support function might target 3s across the board. A core revenue function might target 4-5 on Impact and Technical Maturity but accept 3 on Scalability initially.
Mapping Once and Filing It Away
The capability map is a living document. Review quarterly. New capabilities emerge, priorities shift, technology evolves. A map from 6 months ago is a historical artefact, not a strategic tool.
- How is this different from an AI maturity assessment?
- An AI maturity assessment evaluates organisational readiness: data, people, governance, leadership. Capability mapping evaluates specific AI capabilities and their gaps. Maturity assessment asks "are we ready?" Capability mapping asks "what should we build, in what order?" You need both; capability mapping is more actionable for investment decisions.
- We have dozens of potential AI use cases. How do we keep the mapping manageable?
- Group related use cases into capabilities. "Summarise customer emails," "classify support tickets," and "draft response templates" are three use cases but one capability: customer communication intelligence. Map at the capability level, not the use case level. You should have 15-30 capabilities on the map, not 100 use cases.
- Should we hire a consultant to run this, or can we do it internally?
- You can do the inventory and assessment internally. Your people know the business better than any consultant. External facilitation adds value in two areas: the gap analysis (an outside perspective catches blind spots and challenges assumptions) and the prioritisation (objectivity about which capabilities genuinely create platform value versus which are pet projects). A 2-day facilitated workshop with pre-work is the most efficient format.

