
Why Your AI Strategy Shouldn't Start with AI

The best enterprise AI strategies start with problems, not models. The technology-first approach is a trap, and most organisations fall straight into it.
12 April 2023·8 min read
Isaac Rolfe
Managing Director
"We need an AI strategy" has become the most common request in enterprise boardrooms. But the companies getting real value from AI didn't start with an AI strategy. They started with a business problem and worked backwards.

What You Need to Know

  • Starting with "we need AI" leads to solutions looking for problems. Starting with "we need to process claims 60% faster" leads to AI that actually ships.
  • The technology-first approach produces impressive demos that die in production. The problem-first approach produces unglamorous tools that compound value.
  • Enterprise AI success correlates most strongly with problem clarity, not model sophistication or team size.
  • The best AI roadmap is a list of business problems ranked by impact and data readiness, not a list of AI capabilities ranked by impressiveness.
  • You don't need a Chief AI Officer. You need a clear-eyed view of where your organisation's knowledge gets stuck.
85% of AI projects fail to deliver intended business outcomes.
Source: Gartner, Top Strategic Technology Trends for 2023, October 2022

The AI Strategy Trap

Here's how it typically unfolds. A board member reads about GPT-4 passing the bar exam. The CEO tasks the CTO with "developing an AI strategy." The CTO assembles a team, evaluates vendors, runs proofs of concept, and six months later presents a strategy document full of phrases like "using large language models" and "implementing AI-driven workflows."
The document is technically sound. It's also almost entirely disconnected from the business problems that actually matter.
This is the AI strategy trap: when you start with the technology, you optimise for the technology. You end up with a strategy that answers "how should we use AI?" instead of "how should we solve our most expensive problems?"

The Problem-First Alternative

The companies we see getting genuine value from AI don't have "AI strategies." They have business strategies that happen to use AI as a tool.
The difference is more than semantic. A problem-first approach changes everything:
Who leads the work. Not the CTO alone, but a cross-functional team that includes the people who actually do the work. The claims team knows where claims get stuck. The legal team knows which contract review steps are mechanical. The operations team knows which reports take three days when they should take three hours.
What gets prioritised. Not the most impressive AI capability, but the highest-value problem with the cleanest data. Often, this is something boring: document extraction, knowledge retrieval, triage workflows. These aren't conference-talk material. They're revenue material.
How success is measured. Not "we deployed an AI model" but "claims processing time dropped from 4 days to 1.5 days" or "our consultants spend 30% less time on document review." Measurable outcomes, not technology milestones.

A Better Framework

Instead of "develop an AI strategy," try this:

Step 1: Map Your Knowledge Friction

Where does knowledge get stuck in your organisation? Where do smart people spend time on work that doesn't need their expertise? Where do decisions wait for information that already exists somewhere in the business?
This exercise takes 2-3 days with the right people in the room. It produces a map of your highest-value opportunities, regardless of whether AI is the right solution.

Step 2: Rank by Impact × Readiness

For each friction point, assess two things:
  • Business impact - how much value would solving this create? (Revenue, speed, quality, risk reduction)
  • Data readiness - how accessible and structured is the data needed to solve this?
High impact + high readiness = start here. High impact + low readiness = invest in data first. Low impact = don't bother, regardless of how cool the AI demo looks.
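The ranking above is simple enough to run in a spreadsheet, but a short sketch makes the mechanics concrete. This is a minimal illustration, not part of the article's framework: the `FrictionPoint` class, the 1-5 scales, and the example backlog entries are all hypothetical.

```python
# Illustrative sketch of the Impact x Readiness ranking from Step 2.
# The class name, scales, and backlog entries are invented for this example.
from dataclasses import dataclass


@dataclass
class FrictionPoint:
    name: str
    impact: int     # 1-5: value created by solving it (revenue, speed, quality, risk)
    readiness: int  # 1-5: how accessible and structured the relevant data is

    @property
    def score(self) -> int:
        return self.impact * self.readiness


def rank(points: list[FrictionPoint]) -> list[FrictionPoint]:
    """Highest impact x readiness first: start at the top of the list."""
    return sorted(points, key=lambda p: p.score, reverse=True)


backlog = [
    FrictionPoint("Claims document extraction", impact=5, readiness=4),
    FrictionPoint("Contract clause review", impact=4, readiness=2),
    FrictionPoint("Chatbot for the careers page", impact=1, readiness=5),
]

for p in rank(backlog):
    print(f"{p.name}: {p.score}")
```

Note how the ordering falls out exactly as the text prescribes: the high-impact, high-readiness item leads, the high-impact, low-readiness item signals a data investment, and the low-impact item sinks to the bottom no matter how easy it looks.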

Step 3: Scope a Discovery Sprint

Take your top opportunity and run a structured discovery, typically 4-6 weeks. Not a proof of concept. A discovery. The output isn't a demo; it's a clear specification of what AI needs to do, what data it needs, how it integrates into existing workflows, and what success looks like.
Discovery Before Demo
We've seen too many enterprises skip discovery and jump to building a demo. Demos impress executives. Discoveries produce things that actually work. The companies that run discovery first consistently ship faster and waste less.

Step 4: Build the Foundation, Not Just the First Thing

Here's where most enterprises make their biggest mistake: they build capability #1 as a standalone project. It works, it delivers value, and then capability #2 starts from scratch. New data pipeline, new integration pattern, new governance framework.
The alternative is building your first capability and the foundation simultaneously. The document processing pipeline that serves claims intelligence also serves fraud detection. The knowledge base that powers advisory AI also powers onboarding. The governance framework that covers one model covers twenty.
This costs more upfront. It saves dramatically more over time.

The Question to Ask Your Board

The next time someone says "we need an AI strategy," redirect with a better question:
"What are the three most expensive knowledge problems in our business, and which one has the cleanest data?"
That's not an AI strategy. It's better. It's a strategy that might use AI, or might use process redesign, or might use both. And it will produce value regardless of which model OpenAI releases next month.
But we need some kind of AI strategy for the board. What do we present?
Present a knowledge strategy with AI as the primary tool. Frame it as: "Here are our highest-value knowledge problems, here's our data readiness assessment, and here's a phased roadmap to solve them, starting with a discovery sprint on the top opportunity." That's more credible than "we're going to use LLMs."
What if our competitors are already deploying AI?
Most enterprise AI deployments are demos or pilots that haven't reached production. The competitive advantage goes to whoever ships production AI that delivers measurable value, not whoever announces a "strategic AI partnership" first. Speed to production beats speed to press release.
How long should an AI strategy take to develop?
A problem-mapping exercise takes days, not months. A focused discovery sprint takes 4-6 weeks. You can have a clear, actionable AI roadmap in 8 weeks, compared to the 6-12 months most "AI strategy" projects consume.