The 2026 AI Budget Playbook: From Experimental to Operational

AI budgets in 2026 should look nothing like 2024. How to structure AI spending for compound returns, with benchmarks from NZ and Australian enterprises.
5 December 2025 · 10 min read
Isaac Rolfe
Managing Director
If your 2026 AI budget looks like your 2024 AI budget with a bigger number, you're doing it wrong. The shift from experimental to operational AI changes everything about how you should allocate resources: what you spend on, how you structure it, and what return you should expect. Here's the playbook.

What You Need to Know

  • 2026 AI budgets should shift from "build" to "build and operate." Operational costs (monitoring, maintenance, governance, improvement) should now represent 30-40% of total AI spend. Most organisations still under-budget operations.
  • Platform spending should exceed capability spending. If you're spending more on individual AI capabilities than on shared infrastructure, you're guaranteeing each capability costs as much as the first. The compound advantage requires platform investment.
  • The infrastructure dividend is real. Organisations that invested in AI foundations in 2024-2025 are now deploying new capabilities at 40-60% lower cost. Budget for this dividend. It changes the ROI calculation for every subsequent capability.
  • AI talent budgets should shift from specialist hiring to broad upskilling. The bottleneck isn't data scientists. It's AI-literate business teams. Allocate training budget across the organisation, not just the AI team.
  • Benchmark: NZ enterprises with mature AI programmes are spending 2-4% of revenue on AI capability. Below 1% almost guarantees you stay in pilot mode. Above 5% without governance almost guarantees waste.
2-4% of revenue spent on AI by NZ enterprises with mature AI programmes
Source: RIVER Group, enterprise engagement data, 2024-2025

The Budget Structure Shift

2024: The Experimental Budget

Most NZ enterprises in 2024 budgeted AI like a technology experiment:
| Category | % of AI Budget | Purpose |
| --- | --- | --- |
| Pilot/POC | 40-50% | Prove AI works |
| Technology (licences, compute) | 30-40% | Tools and infrastructure |
| Consulting/delivery | 15-20% | External help |
| Operations | 0-5% | Afterthought |
This structure makes sense when you're testing whether AI delivers value. It doesn't make sense when you already know it does.

2026: The Operational Budget

For organisations moving from experimental to operational AI, the budget should restructure around sustained value delivery:
| Category | % of AI Budget | Purpose |
| --- | --- | --- |
| Platform infrastructure | 25-30% | Shared foundation, data pipelines, knowledge bases |
| Capability delivery | 25-35% | New AI capabilities built on the platform |
| Operations and improvement | 25-30% | Monitoring, maintenance, model updates, continuous improvement |
| People and literacy | 10-15% | Training, upskilling, change management |
| Discovery | 5-10% | Identifying and validating next capabilities |
The critical shifts: platform infrastructure is now a first-class budget line. Operations consumes a quarter of the budget. Discovery shrinks because you've already identified your high-value use cases.

Budgeting by Maturity Stage

Stage 1: Getting Started (First Year of AI)

Total budget benchmark: $150-400K
| Line Item | Investment | Notes |
| --- | --- | --- |
| AI discovery | $30-80K | Map opportunities, assess readiness |
| First capability (on foundation) | $120-250K | Build shared infrastructure with first use case |
| Governance setup | $20-40K | Framework, policies, monitoring |
| Team upskilling | $10-30K | Leadership AI literacy |
Key principle: Invest 20-30% more in your first capability to build it as a foundation, not a standalone. This premium funds the infrastructure that makes everything after it cheaper.
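The payoff of that foundation premium can be sketched with simple arithmetic. The figures below are hypothetical, chosen within the article's ranges (a roughly $180K standalone capability, a 25% foundation premium, a 40% cost reduction on later builds); they are an illustration, not a benchmark.

```python
# Illustrative arithmetic only: compare four standalone builds against a
# foundation-first programme. All dollar figures are hypothetical.

def total_cost(first_capability: float, premium: float,
               later_capabilities: int, dividend: float) -> float:
    """Programme cost when the first build carries a foundation premium
    and each later build is cheaper by the stated dividend."""
    foundation_first = first_capability * (1 + premium)
    later = [first_capability * (1 - dividend) for _ in range(later_capabilities)]
    return foundation_first + sum(later)

standalone = 180_000 * 4                          # four builds, nothing shared
foundation = total_cost(180_000, 0.25, 3, 0.40)   # 25% premium, 40% dividend

print(f"Standalone: ${standalone:,.0f}")   # $720,000
print(f"Foundation: ${foundation:,.0f}")   # $225,000 + 3 x $108,000 = $549,000
```

On these assumed numbers the premium on capability #1 pays for itself before capability #3 ships, which is the compounding effect the principle describes.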

Stage 2: Scaling (Year 2-3)

Total budget benchmark: $400K-$1.2M annually
| Line Item | Investment | Notes |
| --- | --- | --- |
| Platform development | $100-250K | Extend foundation, improve data pipelines |
| New capabilities (2-3 per year) | $150-400K | Built on foundation at reducing cost |
| Operations | $80-200K | Monitoring, maintenance, improvement |
| People and change | $50-150K | Broader upskilling, AI product owners |
| Discovery | $30-60K | Ongoing opportunity identification |
Key principle: Each new capability should cost less than the last. If it doesn't, your platform isn't compounding. Investigate why.

Stage 3: Operational (Year 3+)

Total budget benchmark: $600K-$2M+ annually
| Line Item | Investment | Notes |
| --- | --- | --- |
| Platform evolution | $150-400K | Architecture updates, new model capabilities, agentic workflows |
| New capabilities (3-5 per year) | $200-500K | Rapid deployment on mature foundation |
| Operations | $150-400K | Larger portfolio of capabilities to maintain |
| People and literacy | $80-200K | Distributed AI competency across organisation |
| Innovation and experimentation | $50-150K | Exploring emerging capabilities |
Key principle: The ratio of operations to new build should increase over time. A healthy mature programme spends as much maintaining and improving existing capabilities as building new ones.
The Infrastructure Dividend
Track your "cost per capability" over time. If your third capability costs 60% of your first, you've captured the infrastructure dividend. Report this to the board. It's the best evidence that platform investment is paying off.
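A minimal sketch of that cost-per-capability tracker, using hypothetical figures (the capability names and costs below are invented for illustration):

```python
# Track cost per capability relative to the first build.
# All capability names and dollar figures are hypothetical.

capability_costs = {
    "capability_1": 210_000,   # first build, includes foundation premium
    "capability_2": 150_000,
    "capability_3": 125_000,
}

first = capability_costs["capability_1"]
for name, cost in capability_costs.items():
    print(f"{name}: ${cost:,} ({cost / first:.0%} of first build)")

# The article's test: a third capability at ~60% of the first means
# the infrastructure dividend is being captured.
third_ratio = capability_costs["capability_3"] / first
print("dividend captured" if third_ratio <= 0.60 else "investigate platform reuse")
```

The single ratio (`third_ratio` here) is the number worth putting on a board slide: one figure that summarises whether platform investment is compounding.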

Five Common Budgeting Mistakes

1. Budgeting AI as a One-Off Project

AI capabilities need ongoing investment. A model deployed in January will degrade by June without monitoring, retraining, and maintenance. Budget for the full lifecycle, not just the build.

2. Under-Budgeting Integration

Integration with existing systems (CRM, ERP, document management, email) typically accounts for 50-70% of capability delivery cost. If your budget only covers the AI model, you've budgeted as little as 30% of the real cost.

3. Ignoring Change Management

An AI capability with 95% accuracy and 20% adoption delivers less value than one with 85% accuracy and 90% adoption. Budget for training, communication, workflow redesign, and the ongoing support that drives adoption. Allocate 10-15% of each capability budget to change management.
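The accuracy-versus-adoption claim above is a one-line calculation worth making explicit. A rough model (an assumption, not a formula from the article) is that realised value scales with both accuracy and adoption:

```python
# Back-of-envelope check: value delivered depends on adoption, not just accuracy.
# The multiplicative model and the figures are illustrative assumptions.

def realised_value(accuracy: float, adoption: float, value_per_use: float = 1.0) -> float:
    """Value actually delivered: correct outputs that people actually use."""
    return accuracy * adoption * value_per_use

high_accuracy_low_adoption = realised_value(0.95, 0.20)    # ~0.19
lower_accuracy_high_adoption = realised_value(0.85, 0.90)  # ~0.77

print(high_accuracy_low_adoption, lower_accuracy_high_adoption)
```

On this simple model the better-adopted capability delivers roughly four times the value, which is why the 10-15% change-management allocation earns its keep.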

4. Splitting Budget Across Too Many Initiatives

Three well-funded capabilities on a shared platform deliver more value than eight underfunded experiments. Concentrate investment. Kill pilots that aren't on a path to production.

5. Not Budgeting for Governance

Governance isn't optional, and it isn't free. Budget explicitly for governance framework development, ongoing compliance monitoring, and the people who manage it. The good news: governance investment compounds. What's expensive for capability #1 is nearly free for capabilities #2-4.

Benchmarks: NZ and AU Enterprises

Based on our engagements across NZ and Australian enterprises in 2024-2025:
| Metric | NZ Median | AU Median | Top Quartile (Both) |
| --- | --- | --- | --- |
| AI spend as % of revenue | 1.5% | 2.1% | 3-4% |
| First capability cost | $150K | $220K | $120K (foundation reuse) |
| Annual operational cost | $120K | $180K | $200-350K |
| Capabilities in production | 2 | 3 | 4-6 |
| Cost reduction per capability (foundation) | 35% | 38% | 50%+ |
| Time to deploy new capability | 14 weeks | 12 weeks | 6-8 weeks |
NZ enterprises typically spend less in absolute terms but achieve comparable outcomes per dollar, largely because smaller scale means simpler integration and faster decision-making.
14 weeks: median time to deploy an AI capability for NZ enterprises in 2025
Source: RIVER Group, enterprise engagement data, 2024-2025

The Board Conversation

When presenting your 2026 AI budget to the board, lead with three numbers:
  1. Total AI programme investment (not per-project). Boards that evaluate AI project-by-project miss the compound picture.
  2. Cost per capability trend. Show the declining cost curve as your foundation matures.
  3. Operational ROI, measured in business outcomes (time saved, decisions improved, revenue influenced), not in AI metrics.
The most effective board AI conversations we've seen frame AI spending as a capability investment with compound returns, not as a technology cost with uncertain payoff. Show the compound effect in your own data.

What if we're just starting - should we budget for a full platform?
No. Budget for your first capability built on foundational infrastructure. The platform emerges from real capabilities, not from a separate platform project. The 20-30% premium you pay on capability #1 to build it as reusable infrastructure is your platform investment. Scale from there.
How do we justify AI operational costs to the board?
Frame it as you would any operational capability. You don't question the annual cost of running your CRM or ERP. AI systems require the same ongoing investment. The difference: AI systems that are well-maintained improve over time, while most traditional systems depreciate.
Should we use an external partner or build an internal team?
Both, but phase it. Use an external delivery partner for the first 1-2 capabilities while building internal capability in parallel. By capability #3-4, your internal team should own operations and increasingly own delivery. The goal is self-sufficiency by year 2, not permanent dependence.