AI Capability Mapping for Enterprises: A Practical Framework

How to map your organisation's AI capabilities - current state, target state, and the gap between them. A scoring framework with prioritisation criteria you can use this quarter.
18 March 2025·13 min read
Isaac Rolfe
Managing Director
Dr Tania Wolfgramm
Chief Research Officer
Most enterprises know they need "more AI." Few can articulate exactly which AI capabilities they have, which they need, and how to close the gap in a sequenced, prioritised way. AI capability mapping gives you that clarity: a structured view of where you are, where you need to be, and what to build next.

What You Need to Know

  • AI capability mapping is the process of cataloguing your current AI capabilities, defining your target state, and identifying the gaps that need to be closed, with clear priorities and sequencing.
  • This isn't an AI maturity assessment (which measures organisational readiness). It's a capability-level analysis that identifies specific AI capabilities across business functions and scores them on defined criteria.
  • The output is a prioritised roadmap: which capabilities to build first, which to defer, and which to skip entirely. It directly feeds your AI discovery sprint and investment decisions.
  • Run this exercise annually or after any significant strategic shift. It takes 2-3 days with the right stakeholders.
54% of enterprises report having no structured method for identifying and prioritising AI use cases.
Source: McKinsey, The State of AI in Early 2024

The Framework

AI capability mapping has four phases: Inventory, Assessment, Gap Analysis, and Prioritisation. Each builds on the previous.

Phase 1: Inventory - What Do We Have?

Catalogue every AI capability currently deployed or in development across the organisation. Most enterprises undercount. Shadow AI (teams using tools without central awareness) is common.
For each capability, document:
| Field | Description | Example |
| --- | --- | --- |
| Capability name | What it does, in plain language | "Contract clause extraction" |
| Business function | Which department or function it serves | Legal, Procurement |
| Status | Deployed / In development / Pilot / Planned | Deployed |
| Technology | Models, tools, and infrastructure used | GPT-4o via Azure OpenAI, custom pipeline |
| Data sources | Which systems provide input data | Document management system, CRM |
| Users | Who uses it and how many | Legal team (12 users), weekly |
| Owner | Who is accountable for this capability | Head of Legal Operations |
| Integration level | Standalone / Partially integrated / Fully integrated | Partially integrated |
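The inventory usually lives in a spreadsheet, but if you prefer to keep it in version control alongside your roadmap, a minimal sketch of one record as a Python dataclass could look like the following. The field names simply mirror the table above; nothing here is prescribed by the framework.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    DEPLOYED = "Deployed"
    IN_DEVELOPMENT = "In development"
    PILOT = "Pilot"
    PLANNED = "Planned"


@dataclass
class Capability:
    """One row of the Phase 1 capability inventory."""
    name: str                 # what it does, in plain language
    business_function: str    # which department or function it serves
    status: Status            # Deployed / In development / Pilot / Planned
    technology: str           # models, tools, and infrastructure used
    data_sources: list[str]   # systems providing input data
    users: str                # who uses it and how many
    owner: str                # who is accountable for this capability
    integration_level: str    # Standalone / Partially integrated / Fully integrated


# The worked example from the table above
contract_extraction = Capability(
    name="Contract clause extraction",
    business_function="Legal",
    status=Status.DEPLOYED,
    technology="GPT-4o via Azure OpenAI, custom pipeline",
    data_sources=["Document management system", "CRM"],
    users="Legal team (12 users), weekly",
    owner="Head of Legal Operations",
    integration_level="Partially integrated",
)
```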
Where to look:
  • IT procurement records (AI tool subscriptions)
  • Cloud provider usage logs (API calls to AI services)
  • Department heads (what tools are their teams using?)
  • Innovation or digital teams (pilots and experiments)
  • Shadow IT audit (consumer AI tools being used with corporate data)
Don't Skip Shadow AI
In our experience, 30-50% of AI usage in enterprises happens outside IT's visibility: teams signing up for AI tools with corporate credit cards, individuals using personal ChatGPT accounts with work data. Your inventory must capture these. They represent both capability (teams are solving real problems) and risk (ungoverned data exposure).
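There is no single tool for finding shadow AI, but one rough first pass can be scripted: scan an export of expense or procurement data for known AI vendors. The file name, the "description" column, and the keyword list below are assumptions to adapt to whatever your finance system actually exports.

```python
import csv

# Vendor keywords to look for; extend with whatever tools are relevant to you.
AI_VENDOR_KEYWORDS = ["openai", "anthropic", "azure openai", "gemini", "copilot", "hugging face"]


def flag_possible_ai_spend(path: str) -> list[dict]:
    """Return expense rows whose description mentions a known AI vendor."""
    hits = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            description = row.get("description", "").lower()
            if any(keyword in description for keyword in AI_VENDOR_KEYWORDS):
                hits.append(row)
    return hits


# e.g. flag_possible_ai_spend("corporate_card_export.csv")
```

Treat the hits as leads for the inventory workshop, not as an audit finding.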

Phase 2: Assessment - How Good Are They?

Score each capability on five dimensions. Use a 1-5 scale.

Dimension 1: Business Impact

How much value does this capability deliver (or could it deliver at target state)?
| Score | Level | Criteria |
| --- | --- | --- |
| 1 | Minimal | Saves minor time; no measurable business outcome |
| 2 | Useful | Saves meaningful time for a small group; indirect business benefit |
| 3 | Significant | Measurable impact on a business KPI (cost, speed, quality, revenue) |
| 4 | High | Major impact on a core business process; clear ROI |
| 5 | Transformative | Enables a fundamentally new capability or business model |

Dimension 2: Technical Maturity

How production-ready is the capability?
| Score | Level | Criteria |
| --- | --- | --- |
| 1 | Experimental | Proof of concept only; no production infrastructure |
| 2 | Prototype | Working but fragile; manual processes; limited testing |
| 3 | Operational | Running in production with basic monitoring; some manual intervention |
| 4 | Robust | Automated deployment, monitoring, and alerting; handles edge cases |
| 5 | Optimised | Continuously improved; A/B tested; fully automated lifecycle |

Dimension 3: Data Foundation

How strong is the data layer supporting this capability?
| Score | Level | Criteria |
| --- | --- | --- |
| 1 | Manual | Data is manually prepared for each use; no pipeline |
| 2 | Basic | Some automation; data quality issues; single source |
| 3 | Structured | Automated pipeline; multiple sources; basic quality checks |
| 4 | Managed | Reliable pipeline; data quality monitoring; schema versioning |
| 5 | Excellent | Real-time pipeline; comprehensive quality; shared infrastructure |

Dimension 4: Governance Compliance

How well does this capability align with your governance framework?
| Score | Level | Criteria |
| --- | --- | --- |
| 1 | Ungoverned | No governance applied; unknown data handling |
| 2 | Basic | Usage policy acknowledged; no enforcement mechanism |
| 3 | Governed | Risk classified; access controlled; basic audit trail |
| 4 | Compliant | Full governance alignment; regular reviews; monitoring |
| 5 | Embedded | Automated compliance; governance integrated into deployment pipeline |

Dimension 5: Scalability

How well does this capability scale across the organisation?
| Score | Level | Criteria |
| --- | --- | --- |
| 1 | Single use | Works for one team, one use case; not reusable |
| 2 | Adaptable | Could serve other teams with significant modification |
| 3 | Reusable | Core capability is reusable; configuration needed per team |
| 4 | Platform | Built on shared infrastructure; easily extended |
| 5 | Self-service | Other teams can deploy and configure independently |
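With all five dimensions scored, a simple composite makes capabilities easier to compare side by side. The equal weighting below is an assumption for illustration, not part of the framework; weight the dimensions however suits your strategy, and keep the per-dimension scores alongside the composite, since the single number is only useful for sorting.

```python
DIMENSIONS = [
    "business_impact",
    "technical_maturity",
    "data_foundation",
    "governance_compliance",
    "scalability",
]


def composite_score(scores: dict[str, int], weights: dict[str, float] | None = None) -> float:
    """Weighted average of the five 1-5 dimension scores (equal weights by default)."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total_weight


# Hypothetical scores for the contract clause extraction capability
print(composite_score({
    "business_impact": 4,
    "technical_maturity": 3,
    "data_foundation": 2,
    "governance_compliance": 3,
    "scalability": 2,
}))  # 2.8
```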

Phase 3: Gap Analysis - What's Missing?

For each business function, compare current capabilities against target state. The gap analysis identifies three types of gaps:
Missing capabilities. Functions where AI could deliver significant value but no capability exists. Identify these by interviewing business function leaders: "What are your highest-volume, most repetitive, most error-prone processes?"
Underperforming capabilities. Capabilities that exist but score below target on one or more dimensions. A document processing capability that scores 4 on Business Impact but 2 on Data Foundation has a clear gap to close.
Ungoverned capabilities. Capabilities scoring 1-2 on Governance Compliance. These are immediate risk items regardless of their other scores.
Create a gap matrix:
| Business function | Current capabilities | Target capabilities | Gap type |
| --- | --- | --- | --- |
| Legal | Contract review (basic) | Contract intelligence (advanced) | Underperforming |
| Finance | Invoice processing (pilot) | Full AP automation | Underperforming |
| Customer service | Chatbot (deployed) | Multi-channel intelligence | Underperforming |
| Operations | None | Demand forecasting | Missing |
| HR | CV screening (shadow AI) | Talent intelligence | Ungoverned + Missing |
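If the scored register lives as data, gap classification can be derived rather than hand-typed. A minimal sketch follows; the threshold (Ungoverned means a Governance Compliance score of 2 or less, matching the definition above) and the score values are illustrative assumptions.

```python
def classify_gap(current: dict[str, int] | None, target: dict[str, int]) -> list[str]:
    """Classify a gap as Missing, Ungoverned, and/or Underperforming.

    `current` is None when no capability exists for the function yet;
    otherwise it holds the five 1-5 dimension scores.
    """
    if current is None:
        return ["Missing"]
    gaps = []
    if current.get("governance_compliance", 1) <= 2:
        gaps.append("Ungoverned")       # immediate risk item regardless of other scores
    if any(current[d] < target.get(d, 0) for d in current):
        gaps.append("Underperforming")  # below target on at least one dimension
    return gaps or ["No gap"]


# Finance: invoice processing pilot measured against the full AP automation target
print(classify_gap(
    current={"business_impact": 4, "technical_maturity": 2, "data_foundation": 2,
             "governance_compliance": 3, "scalability": 2},
    target={"business_impact": 4, "technical_maturity": 4, "data_foundation": 4,
            "governance_compliance": 4, "scalability": 3},
))  # ['Underperforming']
```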

Phase 4: Prioritisation - What Do We Build Next?

Not all gaps are equal. Prioritise using three criteria:

Criterion 1: Value-to-Effort Ratio

Estimate the business value (from Impact scores) relative to the effort required to close the gap. High-value, low-effort gaps go first.
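The calculation itself is trivial; what matters is that value and effort come from comparable scales. In the sketch below, value is the Business Impact score and effort is a rough 1-5 build estimate. The effort scale and the cut-offs in the comments are assumptions, not part of the scoring rubric.

```python
def value_to_effort(impact_score: int, effort_estimate: int) -> float:
    """Business impact (1-5) per unit of estimated effort (1-5); higher is better."""
    return impact_score / effort_estimate


print(value_to_effort(impact_score=4, effort_estimate=2))  # 2.0 -> strong candidate
print(value_to_effort(impact_score=3, effort_estimate=5))  # 0.6 -> likely defer
```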

Criterion 2: Foundation Contribution

Does closing this gap create shared infrastructure that accelerates future capabilities? Capabilities that build reusable data pipelines, integration patterns, or governance frameworks score higher, even if their standalone ROI is modest.

Criterion 3: Risk Reduction

Does this gap represent active risk? Ungoverned capabilities and shadow AI should be addressed early regardless of their value-to-effort ratio, because the risk of data exposure or compliance failure is immediate.
Prioritisation matrix:
| Priority | Criteria | Action |
| --- | --- | --- |
| P1 - Now | High value + foundation contribution, OR active governance risk | Build this quarter |
| P2 - Next | High value, moderate effort, limited foundation contribution | Build next quarter |
| P3 - Later | Moderate value, or high effort relative to value | Backlog - reassess quarterly |
| P4 - Skip | Low value, high effort, no foundation contribution | Don't build; revisit annually |
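The matrix can be applied by hand, but once value-to-effort is scored the tier assignment reduces to a few rules. A sketch, assuming the 1.5 and 1.0 thresholds (calibrate against your own scoring); `builds_foundation` corresponds to Criterion 2 and `governance_risk` to Criterion 3.

```python
def priority_tier(value_to_effort: float, builds_foundation: bool, governance_risk: bool) -> str:
    """Map the three prioritisation criteria onto the P1-P4 tiers."""
    if governance_risk or (value_to_effort >= 1.5 and builds_foundation):
        return "P1 - Now"
    if value_to_effort >= 1.5:
        return "P2 - Next"
    if value_to_effort >= 1.0 or builds_foundation:
        return "P3 - Later"
    return "P4 - Skip"


print(priority_tier(2.0, builds_foundation=True, governance_risk=False))   # P1 - Now
print(priority_tier(2.0, builds_foundation=False, governance_risk=False))  # P2 - Next
print(priority_tier(0.6, builds_foundation=False, governance_risk=False))  # P4 - Skip
```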
3-5 is the optimal number of AI capabilities for a mid-to-large enterprise to actively develop simultaneously.
Source: RIVER Group, enterprise engagement data 2024

Running the Exercise

Who Needs to Be in the Room

  • Business function leaders (3-5). They know the processes, pain points, and potential
  • IT / Digital lead. They know the current technology landscape and shadow AI
  • Data / Analytics lead. They know the data landscape and quality issues
  • Governance / Risk representative. They assess compliance implications
  • AI strategy owner. They facilitate and own the output

Timeline

| Day | Activity | Output |
| --- | --- | --- |
| Day 1 (half day) | Inventory workshop: catalogue all current AI capabilities including shadow AI | Capability inventory |
| Day 1 (half day) | Assessment: score each capability on five dimensions | Scored capability register |
| Day 2 (half day) | Gap analysis: identify missing, underperforming, and ungoverned capabilities | Gap matrix |
| Day 2 (half day) | Prioritisation: rank gaps and define build sequence | Prioritised roadmap |
| Day 3 (optional) | Business case development for P1 capabilities | Investment proposals |

Output Document

The final deliverable is a one-page capability map showing:
  1. Current state: All AI capabilities, scored and categorised by business function
  2. Target state: Desired capabilities, with target scores per dimension
  3. Gap summary: Missing, underperforming, and ungoverned capabilities
  4. Prioritised roadmap: P1/P2/P3/P4 with estimated effort and sequencing
  5. Foundation dependencies: Which P1 capabilities create shared infrastructure for P2+
This document becomes the input for quarterly AI discovery sprints, investment decisions, and platform team planning.
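If the map lives as data rather than slides, each roadmap line can be a small record that quarterly reviews regenerate rather than rewrite. Every key and value below is illustrative, not a prescribed schema.

```python
# One line of the prioritised roadmap, kept as plain data so the one-page map
# can be regenerated each quarter. All keys and values are illustrative.
roadmap_entry = {
    "capability": "Invoice processing",
    "business_function": "Finance",
    "current_scores": {"business_impact": 4, "technical_maturity": 2, "data_foundation": 2,
                       "governance_compliance": 3, "scalability": 2},
    "target_scores": {"business_impact": 4, "technical_maturity": 4, "data_foundation": 4,
                      "governance_compliance": 4, "scalability": 3},
    "gap_type": ["Underperforming"],
    "priority": "P1 - Now",
    "estimated_effort": "2 quarters",
    "foundation_for": ["document ingestion pipeline shared with Legal"],
}
```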

Common Mistakes

Mapping Technology Instead of Capability

Don't catalogue AI tools. Catalogue what the AI does for the business. "We use Azure OpenAI" is a technology statement. "We extract structured data from 500 contracts per month with 94% accuracy" is a capability statement. Map the capability, note the technology.

Ignoring Shadow AI

If you only map capabilities that IT knows about, you'll miss 30-50% of actual AI usage. Shadow AI is a valuable signal: it tells you where teams have real problems they're solving with AI, outside official channels. Capture it, govern it, and decide whether to formalise or retire it.

Setting Unrealistic Target States

Not every business function needs a score of 5 on every dimension. Set target states based on the function's strategic importance and the realistic level of investment. A support function might target 3s across the board. A core revenue function might target 4-5 on Impact and Technical Maturity but accept 3 on Scalability initially.

Mapping Once and Filing It Away

The capability map is a living document. Review quarterly. New capabilities emerge, priorities shift, technology evolves. A map from 6 months ago is a historical artefact, not a strategic tool.

How is this different from an AI maturity assessment?

An AI maturity assessment evaluates organisational readiness: data, people, governance, leadership. Capability mapping evaluates specific AI capabilities and their gaps. Maturity assessment asks "are we ready?" Capability mapping asks "what should we build, in what order?" You need both; capability mapping is more actionable for investment decisions.

We have dozens of potential AI use cases. How do we keep the mapping manageable?

Group related use cases into capabilities. "Summarise customer emails," "classify support tickets," and "draft response templates" are three use cases but one capability: customer communication intelligence. Map at the capability level, not the use case level. You should have 15-30 capabilities on the map, not 100 use cases.

Should we hire a consultant to run this, or can we do it internally?

You can do the inventory and assessment internally. Your people know the business better than any consultant. External facilitation adds value in two areas: the gap analysis (an outside perspective catches blind spots and challenges assumptions) and the prioritisation (objectivity about which capabilities genuinely create platform value versus which are pet projects). A 2-day facilitated workshop with pre-work is the most efficient format.