
How to Evaluate an AI Use Case in 30 Minutes

A practical scoring framework for ranking AI opportunities by business impact and feasibility - no data science degree required.
20 September 2023 · 9 min read
Mak Khan, Chief AI Officer
Dr Tania Wolfgramm, Chief Research Officer
Your team has identified fifteen potential AI use cases. Your board wants to see a roadmap by next month. The question isn't whether AI can do these things. It's which ones you should do first, and which ones you should never do at all.

What You Need to Know

  • Not every process that can use AI should use AI. The evaluation framework matters more than the technology.
  • Use cases should be ranked on business impact × feasibility, not on how impressive the demo would look.
  • The best first AI capability is rarely the highest-value one. It's the highest-value one with the best data readiness and the strongest foundation potential.
  • A 30-minute scoring exercise with the right people in the room is more useful than a 3-month AI assessment.
  • Kill weak use cases early. The cost of pursuing a bad use case far exceeds the cost of evaluating more options.
85% of AI projects that fail to deliver value were solving the wrong problem.
Source: Gartner, Top Strategic Technology Trends for 2023, October 2022

The Five-Factor Scoring Framework

Score each potential use case on five factors (1-5 scale), then combine the impact factors and the feasibility factors into a single composite score using the formula in the Scoring and Ranking section. The exercise takes 30 minutes per use case with the right people in the room.

Factor 1: Business Impact (1-5)

What's the measurable business value if this use case succeeds?
  • 5: Revenue impact >$500K/year or fundamental competitive advantage
  • 4: Revenue impact $200-500K/year or significant cost reduction
  • 3: Efficiency gain (20-40% time saving) across a meaningful operation
  • 2: Moderate efficiency gain (<20%) or limited scope of impact
  • 1: Nice-to-have improvement with minimal measurable value
Key question: "If this worked perfectly, what number changes in the business?"

Factor 2: Data Readiness (1-5)

How accessible and structured is the data this use case needs?
  • 5: Data is digital, structured, accessible via API, and well-governed
  • 4: Data exists digitally but needs some cleaning or integration work
  • 3: Data exists but is scattered across systems or partially unstructured
  • 2: Data exists but is largely unstructured (PDFs, emails, legacy formats)
  • 1: Data doesn't exist yet or lives entirely in people's heads
Key question: "Can we get 80% of the data we need in a machine-readable format within 4 weeks?"

Factor 3: Process Clarity (1-5)

How well-defined is the current process and the desired outcome?
  • 5: Process is documented, outcomes are measurable, edge cases are known
  • 4: Process is understood but not fully documented; outcomes are clear
  • 3: Process is generally understood; some ambiguity in outcomes or scope
  • 2: Process is informal or varies significantly between teams
  • 1: Process is undefined or outcomes are unclear
Key question: "Can the domain expert explain the 'right answer' for a given input in under 5 minutes?"

Factor 4: Foundation Potential (1-5)

How much shared infrastructure does this use case build for future capabilities?
  • 5: Builds document processing, knowledge base, AND integration patterns reusable by 3+ future capabilities
  • 4: Builds 2 of the above with clear reuse opportunities
  • 3: Builds 1 reusable component with some future potential
  • 2: Primarily standalone with limited shared infrastructure
  • 1: Completely standalone; no infrastructure reuse
Key question: "What does this use case build that makes the next three capabilities faster?"

Factor 5: Organisational Readiness (1-5)

Does the team and leadership context support success?
  • 5: Strong executive sponsor, enthusiastic domain team, clear mandate
  • 4: Executive support and willing domain team; some organisational complexity
  • 3: General interest but no dedicated sponsor; team is open but busy
  • 2: Mixed organisational signals; potential resistance from key stakeholders
  • 1: No clear sponsor; active resistance or competing priorities
Key question: "Is there someone with authority who will fight for this initiative when it hits obstacles?"

Scoring and Ranking

Composite score = (Impact × Foundation Potential) × Average(Data Readiness, Process Clarity, Organisational Readiness)
This formula weights business impact and compound value highest, while using feasibility factors as a multiplier. A high-impact, high-foundation use case with moderate feasibility scores higher than a moderate-impact use case with perfect feasibility.
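The formula is easy to sanity-check in a few lines of code. This is a minimal sketch, assuming 1-5 integer scores per factor; the function and parameter names are illustrative, not from the article.

```python
# Sketch of the composite scoring formula:
# (Impact x Foundation Potential) x average of the three feasibility
# factors (Data Readiness, Process Clarity, Organisational Readiness).

def composite_score(impact, foundation, data, process, org):
    """All inputs are 1-5 scores; returns the weighted composite."""
    feasibility = (data + process + org) / 3
    return impact * foundation * feasibility

# High impact and high foundation with only moderate feasibility...
strong_candidate = composite_score(impact=4, foundation=5, data=3, process=3, org=3)  # 60.0
# ...outscores moderate impact with perfect feasibility.
easy_win = composite_score(impact=3, foundation=3, data=5, process=5, org=5)          # 45.0
```

Running the two examples reproduces the article's claim: the strong candidate scores 60 against the easy win's 45, because impact and foundation potential multiply rather than average.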

The Kill Zone

Any use case scoring below 2 on any single factor should be deferred or eliminated:
  • Impact < 2: Not worth the investment regardless of feasibility
  • Data readiness < 2: Foundation work needed before AI work
  • Process clarity < 2: Process redesign needed before AI work
  • Foundation potential < 2: Consider only after foundation is built
  • Organisational readiness < 2: Political work needed before technical work
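The kill-zone rule is a simple threshold check that runs before the composite score is even computed. A hedged sketch, with illustrative factor names:

```python
# Kill-zone filter: any use case scoring below 2 on any single factor
# is deferred or eliminated, regardless of its composite score.

KILL_THRESHOLD = 2

def kill_zone_failures(scores):
    """scores: dict mapping factor name -> 1-5 score.
    Returns the factors that put the use case in the kill zone."""
    return [factor for factor, score in scores.items() if score < KILL_THRESHOLD]

candidate = {
    "impact": 4,
    "data_readiness": 1,   # data lives entirely in people's heads
    "process_clarity": 3,
    "foundation": 3,
    "org_readiness": 4,
}
failures = kill_zone_failures(candidate)  # ["data_readiness"]
```

In this example the candidate is deferred on data readiness alone: foundation work comes before AI work, exactly as the list above prescribes.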

The Sweet Spot

The ideal first AI capability scores:
  • Impact: 3-5 (meaningful business value)
  • Data readiness: 4-5 (ready to go)
  • Process clarity: 4-5 (well-understood)
  • Foundation potential: 4-5 (builds shared infrastructure)
  • Organisational readiness: 4-5 (clear sponsor and willing team)
This is why the first capability is often something like document processing or knowledge retrieval. Not the flashiest opportunity, but the one with the best combination of value, readiness, and foundation potential.

Run the Exercise as a Workshop

Gather the domain experts, the technical lead, and the executive sponsor in a room for 2 hours. Score 5-10 use cases together. The discussion is as valuable as the scores. It surfaces assumptions, reveals data gaps, and builds alignment on priorities.

Common Traps

The Demo Trap: Choosing the use case that would make the most impressive demo rather than the one with the highest composite score. Demos impress boards; composite scores produce value.
The Shiny Object Trap: Choosing the use case that uses the newest AI technology rather than the one that solves the most important problem. The technology should serve the problem, not the other way around.
The Easy Win Trap: Choosing the easiest use case (highest feasibility, lowest impact) to show quick results. Quick results with low impact teach the organisation that AI is trivial.
The Boil-the-Ocean Trap: Choosing the biggest, most transformational use case as the starting point. Start with something achievable that builds foundation, then compound toward the transformational vision.
How many use cases should we evaluate before choosing?

Score 8-12 candidates, expect 3-5 to score well, and start with 1. The evaluation itself is valuable. It builds shared understanding of where AI fits in your business, even for the use cases you defer.

Who should be in the scoring workshop?

Minimum: one domain expert per use case, one technical lead, one executive sponsor. Ideal: add a data owner and someone from the operations team who'll use the system daily. Keep it under 8 people. More than that and the exercise becomes a committee meeting.

What if our highest-scoring use case has low foundation potential?

Consider doing a foundation-building exercise alongside the first capability, even if it costs 20-30% more upfront. The compound savings from capabilities #2-4 will more than justify the investment. See our case study on compound AI value for the economics.