
The AI Operating Model: From Projects to Continuous Capability

Enterprises that treat AI as a series of projects will always be slower than those that build an AI operating model. The organisational design that makes AI compound.
15 April 2025 · 12 min read
Tim Hatherley-Greene
Chief Operating Officer
Isaac Rolfe
Managing Director
Most enterprises have an AI strategy. Fewer have an AI operating model. The difference matters: a strategy says what you want to achieve with AI. An operating model says how your organisation continuously delivers, governs, and evolves AI capabilities. Without the operating model, the strategy is a document that gathers dust.

What You Need to Know

  • An AI operating model is the organisational structure, governance, processes, and infrastructure that enable an enterprise to continuously build and scale AI capabilities, not just deliver individual AI projects.
  • The shift from "AI projects" to an "AI operating model" is the single most impactful organisational change most enterprises can make. It's the difference between linear progress and compound value.
  • The operating model has four pillars: delivery structure, governance framework, platform infrastructure, and capability evolution. Weakness in any one pillar creates a bottleneck.
  • You don't need to build the full operating model before starting. Start with the minimum viable version and mature it as your AI capabilities grow. But you do need to start deliberately. It won't emerge on its own.
  • The enterprises that build operating models in 2025 will have a structural advantage by 2027 that's nearly impossible to replicate quickly.
72% of enterprises report their AI initiatives are managed as isolated projects with no shared infrastructure.
Source: McKinsey, The State of AI in Early 2025, March 2025

Why Projects Don't Scale

The project model works like this: a business unit identifies an AI opportunity. They get budget approval. They hire a vendor or form a team. They build the thing. It goes live. The team disbands or moves on.
This works for project #1. By project #5, you've built five separate data pipelines, five sets of integration code, five governance frameworks, and five teams that learned the same lessons independently. You've spent 3-4x what you needed to and built nothing that compounds.
The project model has three structural problems:
No shared learning. Each project team discovers the same challenges (data quality, integration complexity, governance requirements) and solves them independently. The fifth team is no smarter than the first.
No shared infrastructure. Each project builds its own foundation. The document processing pipeline built for claims doesn't benefit procurement. The knowledge base built for customer service doesn't help compliance.
No velocity gain. Project #5 takes as long as project #1. There's no acceleration because nothing carries forward. Every initiative starts from zero.
"The test is simple: is your fifth AI capability significantly faster and cheaper than your first? If not, you're running projects, not building a platform."
Isaac Rolfe, Managing Director

The Four Pillars

1. Delivery Structure

The delivery structure answers: who builds AI capabilities, and how do they work together?
For most enterprises, a hub-and-spoke model works best: a central AI platform team (the hub) owns shared infrastructure, governance, and standards, while domain teams (the spokes) own specific AI capabilities within their business areas.
| Role | Central Team | Domain Teams |
|---|---|---|
| Owns | Platform, infrastructure, standards, governance | Business-specific AI capabilities |
| Builds | Shared data pipelines, knowledge bases, integration frameworks | Use-case-specific models, workflows, interfaces |
| Maintains | AI development environment, monitoring, security | Domain data quality, user training, business outcomes |
| Size | 3-8 people (scales with maturity) | 1-3 people per active domain |
The central team doesn't build every AI capability. They build the platform that makes every capability possible, and they support domain teams in delivering capabilities faster.
The critical hire: The AI platform lead. This person sits between technology and business, understands both infrastructure and use cases, and translates between engineering teams and executive stakeholders. This role is more important than any individual data scientist or ML engineer.
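To make the hub-and-spoke split concrete, here is a minimal sketch of the contract between hub and spokes, expressed as code. Every name in it (DocumentPipeline, CrmConnector, ClaimsTriage) is hypothetical; the point is that the domain team writes only use-case logic, while everything it imports is owned and maintained centrally.

```python
# Hypothetical sketch of the hub-and-spoke contract. The "hub" classes
# stand in for platform-owned building blocks; only the "spoke" section
# is written by a domain team.

from dataclasses import dataclass

# --- Owned by the central platform team (the hub) ---

@dataclass
class Document:
    source: str
    text: str

class DocumentPipeline:
    """Shared ingestion: parsing, chunking, embedding, storage."""
    def ingest(self, raw: bytes, source: str) -> Document:
        # Real pipeline: OCR, chunking, embedding; stubbed for the sketch.
        return Document(source=source, text=raw.decode("utf-8", errors="replace"))

class CrmConnector:
    """Pre-built, governed connector to the CRM system."""
    def fetch_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "segment": "enterprise"}  # stubbed

# --- Owned by a domain team (a spoke) ---

class ClaimsTriage:
    """Use-case-specific capability built on platform components."""
    def __init__(self, pipeline: DocumentPipeline, crm: CrmConnector):
        self.pipeline = pipeline
        self.crm = crm

    def triage(self, raw_claim: bytes, customer_id: str) -> str:
        doc = self.pipeline.ingest(raw_claim, source="claims-inbox")
        customer = self.crm.fetch_customer(customer_id)
        # Domain logic only; the pipeline and connector are reused as-is.
        if "urgent" in doc.text.lower() or customer["segment"] == "enterprise":
            return "priority"
        return "standard"

print(ClaimsTriage(DocumentPipeline(), CrmConnector()).triage(b"Urgent: water damage", "C-1042"))
```

When the second domain team arrives, it imports the same pipeline and connectors. That reuse is where the 3-4x duplication cost of the project model disappears.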

2. Governance Framework

Governance answers: how do we make decisions about AI safely and quickly?
The operating model approach to governance is fundamentally different from project-based governance. In a project model, governance is applied per-project. Each initiative goes through its own risk assessment, approval process, and compliance review. This is slow and inconsistent.
In an operating model, governance is codified into tiers:
Tier 1: Pre-approved patterns. Common AI use cases (document summarisation, data extraction, search enhancement) that have been risk-assessed once and can be deployed by any domain team without additional approval. The central team maintains the approved pattern library.
Tier 2: Light review. AI capabilities that involve sensitive data or customer-facing outputs. Require review by the governance committee (monthly cadence) but follow established templates.
Tier 3: Full assessment. Novel AI applications, autonomous decision-making, or high-stakes domains. Full risk assessment, ethics review, and executive approval.
Most AI capabilities fall into Tier 1 or Tier 2. The governance framework accelerates these while maintaining rigour for the high-risk minority.
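One way to codify these tiers is as data plus a small routing function, so the approval path follows from declared attributes rather than per-project judgement. The attribute names and rules below are illustrative assumptions, not a prescribed policy.

```python
# Illustrative sketch: routing a proposed AI capability to a governance
# tier. Attribute names and thresholds are assumptions for the example.

from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    PRE_APPROVED = 1     # deploy against the approved pattern library
    LIGHT_REVIEW = 2     # monthly governance committee, templated review
    FULL_ASSESSMENT = 3  # risk assessment, ethics review, executive sign-off

# Patterns the central team has risk-assessed once (Tier 1).
APPROVED_PATTERNS = {"document_summarisation", "data_extraction", "search_enhancement"}

@dataclass
class Proposal:
    pattern: str
    uses_sensitive_data: bool
    customer_facing: bool
    autonomous_decisions: bool

def route(p: Proposal) -> Tier:
    if p.autonomous_decisions:
        return Tier.FULL_ASSESSMENT
    if p.uses_sensitive_data or p.customer_facing:
        return Tier.LIGHT_REVIEW
    if p.pattern in APPROVED_PATTERNS:
        return Tier.PRE_APPROVED
    return Tier.FULL_ASSESSMENT  # novel patterns default to the strictest path

assert route(Proposal("document_summarisation", False, False, False)) == Tier.PRE_APPROVED
```

The detail that matters is the default: anything novel falls through to the strictest tier, so speed for common patterns never comes at the cost of rigour for the unusual ones.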

3. Platform Infrastructure

The platform answers: what shared infrastructure makes every AI capability faster?
This is the AI foundation, but framed as an operational concern, not a project deliverable. The platform includes:
Data infrastructure. Shared data pipelines, embedding pipelines, vector stores, and data quality monitoring. When a new AI capability needs access to document data, the pipeline already exists.
Integration framework. Pre-built connectors to core enterprise systems (CRM, ERP, document management, communication platforms). Each new capability reuses these connectors rather than building bespoke integrations.
AI orchestration layer. The coordination system that manages how models, tools, and data sources work together. This is the conductor that enables multi-step AI workflows.
Monitoring and observability. Centralised logging, performance monitoring, cost tracking, and drift detection across all AI capabilities. Problems surface quickly; improvements benefit everything.
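As a rough illustration of the orchestration layer, the sketch below shows a minimal coordinator that sequences registered tools into a multi-step workflow and logs each step centrally, which is also where monitoring and cost tracking would hook in. The registry and step format are assumptions for the example; a real orchestration layer adds retries, tracing, and error handling.

```python
# Minimal sketch of an orchestration layer: one coordinator, shared by
# every capability, that sequences tools and logs centrally.

import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-platform")  # centralised: every capability logs here

class Orchestrator:
    def __init__(self) -> None:
        self._tools: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, tool: Callable[[str], str]) -> None:
        self._tools[name] = tool

    def run(self, steps: list[str], payload: str) -> str:
        """Run a multi-step workflow; each step's output feeds the next."""
        for name in steps:
            payload = self._tools[name](payload)
            log.info("step=%s output_chars=%d", name, len(payload))  # monitoring hook
        return payload

orch = Orchestrator()
orch.register("extract", lambda text: text.strip().lower())
orch.register("summarise", lambda text: text[:80])  # stand-in for a model call
result = orch.run(["extract", "summarise"], "  RAW CLAIM DOCUMENT ...  ")
```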
4x faster delivery of the fourth AI capability when built on shared platform infrastructure.
Source: RIVER Group, enterprise engagement data, 2023-2025

4. Capability Evolution

Evolution answers: how do AI capabilities improve over time?
This is the pillar most enterprises miss entirely. In a project model, capabilities are "done" when they go live. In an operating model, capabilities continuously evolve:
Model updates. New model releases (Claude 4, GPT-5, open-source improvements) create opportunities to improve every existing capability. The operating model includes a process for evaluating and deploying model updates across the platform.
Data enrichment. As more capabilities are deployed, more data flows through the platform. This data improves existing capabilities. The claims processing system gets better as it processes more claims. The knowledge base gets richer as more sources are integrated.
Pattern propagation. When one domain team discovers a better approach (a more effective prompt pattern, a more reliable integration method), it propagates to all domain teams through the shared platform.
Feedback loops. User feedback on AI capabilities drives targeted improvements. The operating model includes structured feedback collection and prioritisation, not just bug reports, but "this would be more useful if..." insights.
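The model-update process above lends itself to a simple gate: replay a fixed evaluation set through the incumbent and the candidate model, and promote only on a measurable win. The call_model stub and scoring rule below are placeholders; the shape of the loop is the point.

```python
# Sketch of a model-update gate: promote a new model only if it beats the
# incumbent on a fixed evaluation set. Model calls and scoring are stubbed.

GOLDEN_SET = [
    {"input": "summarise: policy covers flood damage up to $50k", "expected": "flood"},
    {"input": "summarise: claim denied, missing documentation", "expected": "denied"},
]

def call_model(model: str, prompt: str) -> str:
    # Placeholder: in practice this calls the provider's API for `model`.
    return prompt.split(":")[-1]

def score(model: str) -> float:
    hits = sum(1 for case in GOLDEN_SET
               if case["expected"] in call_model(model, case["input"]))
    return hits / len(GOLDEN_SET)

def should_promote(current: str, candidate: str, margin: float = 0.02) -> bool:
    """Require a measurable improvement before rolling out platform-wide."""
    return score(candidate) >= score(current) + margin

if should_promote("model-current", "model-candidate"):
    print("roll out candidate across all capabilities")
else:
    print("keep current model; log evaluation for the quarterly review")
```

Because the platform is shared, one such harness covers every capability: when a new model clears the gate, the improvement propagates everywhere at once.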

Building the Operating Model

You don't build the full operating model on day one. You build it in stages, aligned with your AI maturity:

Stage 1: Foundation (Months 1-3)

  • Appoint AI platform lead
  • Build first AI capability with reusable infrastructure
  • Establish basic governance (AI usage policy, risk tiers)
  • Set up centralised monitoring

Stage 2: Structure (Months 4-6)

  • Form central platform team (2-4 people)
  • Enable first domain team to build on the platform
  • Codify Tier 1 pre-approved patterns
  • Implement feedback loops

Stage 3: Scale (Months 7-12)

  • Multiple domain teams building on the platform
  • Governance committee operating on monthly cadence
  • Platform infrastructure handling 80%+ of common requirements
  • Capability evolution process running quarterly

Stage 4: Maturity (Year 2+)

  • AI operating model is business-as-usual
  • New capabilities are delivered in weeks, not months
  • Governance is largely automated
  • The organisation has a structural advantage that compounds with every capability
The Operating Model Test
Ask three questions: (1) Can a new domain team build an AI capability without the central team doing most of the work? (2) Is your fifth capability significantly faster than your first? (3) Do improvements to one capability automatically benefit others? If yes to all three, you have an operating model. If not, you have a project team.

The Competitive Dynamic

The operating model creates a structural advantage that's difficult to replicate. An enterprise with a mature operating model can deploy a new AI capability in 3-4 weeks. A competitor starting from scratch needs 3-4 months for the same capability.
That gap compounds. By the time the competitor has their first capability live, the operating-model enterprise is on its fifth, each one richer, better governed, and more integrated than anything built as a standalone project.
This is the AI equivalent of compound interest. The earlier you start building the operating model, the larger the advantage becomes.

The AI operating model isn't a luxury for enterprises that are "mature enough." It's the minimum viable structure for any organisation that plans to build more than two AI capabilities. Start small, build deliberately, and let the compound advantage do the rest.
How big does our organisation need to be for an AI operating model?
Any organisation building more than two AI capabilities benefits from an operating model. For a 200-person company, the "central team" might be one platform lead and one engineer. For a 10,000-person enterprise, it might be 8-12 people. The model scales. The principles don't change.
Should the AI platform team sit in IT or in the business?
Neither exclusively. The platform team should report to a technology leader (CTO, CIO, or CDO) but have a dotted line to business leadership. The worst outcome is an AI team that only speaks technology or only speaks business. The platform lead must bridge both.
What's the relationship between the AI operating model and our existing data team?
Complementary. The data team manages data infrastructure, quality, and governance. The AI platform team builds on that foundation, adding AI-specific pipelines, orchestration, and capabilities. In many enterprises, the AI platform team evolves from within the data team.