
Measuring AI ROI Beyond the Pilot Phase

Standard ROI frameworks don't work for enterprise AI. A practical measurement approach that captures compound value, not just cost savings.
20 July 2025 · 9 min read
Dr Tania Wolfgramm
Chief Research Officer
Isaac Rolfe
Managing Director
"What's the ROI?" is the most common question in enterprise AI, and the one most often answered badly. Standard ROI frameworks, designed for capital expenditure with predictable returns, systematically undervalue AI investments because they miss the compound effects. Here's a framework that works.

Why Standard ROI Fails for AI

Standard ROI calculates: (Gain from Investment − Cost of Investment) / Cost of Investment.
For a new machine or software licence, this works. The gain is predictable, the cost is knowable, and the investment is self-contained.
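As code, the standard calculation is a one-liner. A minimal sketch (the figures in the comment are illustrative, not from any real project):

```python
def simple_roi(gain: float, cost: float) -> float:
    """Standard ROI: (gain - cost) / cost."""
    return (gain - cost) / cost

# A £200k gain on a £100k licence: simple_roi(200_000, 100_000) -> 1.0, i.e. a 100% return
```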
AI investments break all three assumptions:
The gain isn't self-contained. An AI capability built for one team often benefits others. The claims processing model improves accuracy for the claims team but also generates data insights that the underwriting team uses.
The gain compounds. The second AI project costs less and delivers more because it builds on the first. A simple ROI calculation treats each project independently and misses this.
The cost shifts over time. AI capabilities become cheaper to operate as models improve and infrastructure matures. The year-one cost bears little resemblance to the year-three cost.
42% of enterprises report difficulty measuring AI ROI with standard financial frameworks (Source: Deloitte, AI Value Framework Survey 2025).

The Three-Layer Framework

We use a three-layer framework that captures the full value of AI investments:

Layer 1: Direct Value (What Standard ROI Measures)

This is the easy part. Direct, measurable value from a specific AI capability:
  • Time savings: Hours reduced per process
  • Cost reduction: FTE equivalents, error reduction, cycle time improvement
  • Revenue impact: Increased throughput, higher conversion, new capability
Every AI investment should have Layer 1 value. If it doesn't, the investment shouldn't proceed.
Layer 1 Is Necessary but Insufficient
Layer 1 alone typically justifies AI investments at a 2-4× return. But it captures only 30-40% of the actual value delivered. The compound and strategic layers are where the real returns live.
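The Layer 1 arithmetic itself is simple to annualise. A hedged sketch of the three direct-value categories; the function name and inputs are hypothetical, not a prescribed model:

```python
def layer1_value(hours_saved_per_week: float, hourly_rate: float,
                 errors_avoided_per_year: int, cost_per_error: float,
                 extra_revenue: float = 0.0) -> float:
    """Annualised direct value: time savings + error reduction + revenue impact."""
    time_savings = hours_saved_per_week * 52 * hourly_rate   # time savings, annualised
    error_savings = errors_avoided_per_year * cost_per_error  # cost reduction from fewer errors
    return time_savings + error_savings + extra_revenue       # plus any revenue impact

# 10 hrs/week at £50/hr plus 100 avoided errors at £200 each -> £46,000/year
```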
[Chart: Where Enterprise AI Value Actually Lives. Source: BCG, AI Platform Economics 2025; RIVER Group]

Layer 2: Compound Value (What Standard ROI Misses)

This is the value created by AI investments building on each other:
  • Infrastructure reuse: Data pipelines, model hosting, and governance frameworks shared across projects
  • Speed improvement: Each subsequent AI project is faster and cheaper than the last
  • Data compounding: Data generated by one AI capability improves other capabilities
  • Capability transfer: Skills developed in one project apply to the next
How to measure:
  • Track cost-per-project over time (should decrease 30-50% per project)
  • Track time-to-deployment over time (should decrease similarly)
  • Track cross-project data reuse (number of downstream consumers per data pipeline)
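The first of those tracking rules can be sketched in a few lines. The project costs below are hypothetical, chosen to illustrate figures inside the 30-50% band:

```python
def cost_decline(costs: list[float]) -> list[float]:
    """Fractional cost reduction from each project to the next."""
    return [(prev - cur) / prev for prev, cur in zip(costs, costs[1:])]

# Three projects costing £500k, £320k, £200k decline by 36% and then 37.5%,
# inside the 30-50% per-project band the framework expects.
```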
47% average cost reduction from first to third AI project when built on shared infrastructure (Source: BCG, AI Platform Economics 2025).

Layer 3: Strategic Value (What ROI Can't Measure)

This is the value that changes the organisation's competitive position:
  • Speed to opportunity: How fast can you deploy AI to a new use case?
  • Talent attraction: AI-capable organisations attract better talent
  • Market positioning: Being known as AI-forward influences customer and partner decisions
  • Optionality: Each AI capability creates options for future capabilities that don't exist yet
Layer 3 is hard to quantify but easy to identify. The question isn't "what's the ROI?" but "what's the cost of not having this capability when we need it?"
The most valuable aspect of enterprise AI is often the option value: the future capabilities that become possible because you built the foundation. You can't put a number on a capability that doesn't exist yet, but you can recognise that the organisations with AI foundations will be able to respond to opportunities that others simply can't.
Dr Tania Wolfgramm
Chief Research Officer

Practical Measurement

Before You Start: Define the Baseline

Before any AI investment, measure the current state:
  • Process time (end-to-end, not just the step AI will replace)
  • Error rate and rework cost
  • Staff time allocation (how much is spent on the task vs higher-value work)
  • Customer or stakeholder satisfaction with the process
Without a baseline, you can't measure improvement. This sounds obvious but is skipped in most AI projects.
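One lightweight way to make the baseline concrete is a simple record captured before the project starts. The field names here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    """Pre-investment snapshot of the process the AI will change."""
    process_hours: float     # end-to-end, not just the step AI will replace
    error_rate: float        # fraction of cases needing rework
    rework_cost: float       # average cost per error
    task_time_share: float   # fraction of staff time spent on the task
    satisfaction: float      # e.g. CSAT score for the process

def improvement(before: Baseline, after: Baseline) -> dict[str, float]:
    """Relative improvement on the metrics where lower is better."""
    return {
        "time_reduction": 1 - after.process_hours / before.process_hours,
        "error_reduction": 1 - after.error_rate / before.error_rate,
    }
```

Taking the same snapshot at 6 and 12 months turns "the process feels faster" into a defensible percentage.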

During: Track Leading Indicators

Don't wait 12 months for ROI. Track leading indicators monthly:
  • Adoption rate: What percentage of eligible users/processes are using the AI?
  • Accuracy: Is the AI output meeting quality thresholds?
  • Time savings: Are processes actually faster, or is time being spent on review/correction?
  • User satisfaction: Do users trust and value the AI output?
The Adoption Trap
An AI system with 95% accuracy but 20% adoption delivers almost no value. Adoption is the multiplier that turns capability into impact. Track it obsessively.
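The adoption trap is just multiplication. A minimal sketch of why the numbers above come out so low:

```python
def effective_impact(potential_value: float, accuracy: float, adoption: float) -> float:
    """Delivered value is capability scaled by adoption -- the multiplier."""
    return potential_value * accuracy * adoption

# 95% accuracy at 20% adoption captures only 19% of the potential value.
```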

After: Measure All Three Layers

At 6 months and 12 months, measure across all three layers:
| Layer     | Metric               | How to measure                               |
|-----------|----------------------|----------------------------------------------|
| Direct    | Cost savings         | Before/after comparison of process costs     |
| Direct    | Time savings         | Before/after comparison of process duration  |
| Direct    | Quality improvement  | Before/after error rates                     |
| Compound  | Project speed        | Time and cost of second project vs first     |
| Compound  | Infrastructure reuse | Components shared across projects            |
| Compound  | Data reuse           | Downstream consumers of AI-generated data    |
| Strategic | Speed to opportunity | Time to deploy new AI capability             |
| Strategic | Talent impact        | Recruitment and retention in AI-enabled teams|
| Strategic | Market position      | Customer/partner perception of AI capability |

The Reporting Framework

For the Board

One page. Three numbers. One narrative.
  1. Direct ROI: X:1 return on this year's AI investment (Layer 1)
  2. Compound index: Each new AI project is Y% faster and Z% cheaper than the last (Layer 2)
  3. Capability score: We can deploy a new AI use case in N weeks (Layer 3)
Narrative: Where we are in building AI as an organisational capability, what we've learned, and where we're heading.
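A sketch of how the three numbers might be rendered for that one-pager; the figures passed in below are placeholders, not benchmarks:

```python
def board_summary(direct_roi: float, speedup_pct: float, cost_cut_pct: float,
                  deploy_weeks: int) -> str:
    """Render the three board-level numbers, one line each."""
    return "\n".join([
        f"Direct ROI: {direct_roi:.1f}:1 on this year's AI investment",
        f"Compound index: each project {speedup_pct:.0f}% faster, {cost_cut_pct:.0f}% cheaper than the last",
        f"Capability score: a new AI use case deployable in {deploy_weeks} weeks",
    ])

print(board_summary(3.0, 40, 47, 6))
```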

For the CFO

Total cost of AI programme. Direct value delivered (hard numbers). Compound savings (project-over-project improvement). Cost avoidance (what it would have cost to build each project independently).

For the CTO

Technical metrics: infrastructure utilisation, model performance, system reliability, technical debt score. Cross-project metrics: shared component reuse, API call volumes, data pipeline throughput.
Stop trying to justify AI with a spreadsheet that treats every project independently. If your measurement framework doesn't capture compound value, it's lying to you about the real returns.
Isaac Rolfe
Managing Director
What ROI should I expect from an AI investment?
Layer 1 (direct value) typically returns 2-4× within 12 months for well-chosen use cases. When you include Layer 2 (compound value from shared infrastructure), the programme-level ROI reaches 5-10× over 24 months. Layer 3 (strategic value) is harder to quantify but is often the most important driver of executive support.
How do I get budget approval when the compound value hasn't materialised yet?
Fund the first project on Layer 1 value alone. It should justify itself independently. Use the compound value argument for the second and third projects, pointing to the infrastructure already built. By project three, the compound effect is measurable and the budget conversation shifts from "should we invest?" to "how fast should we invest?"
What if the AI ROI is negative?
Investigate at the layer level. Negative Layer 1 usually means the wrong use case was chosen or adoption is low. Negative Layer 2 usually means the foundation isn't being reused (isolated projects). If both are negative after 12 months, the AI programme needs a reset, not more investment.