
The AI Maturity Curve: Where NZ Enterprises Actually Sit

NZ enterprise AI maturity: where organisations sit on the curve and what the next steps look like for each stage. Honest assessment, practical guidance.
26 January 2026 · 8 min read
Isaac Rolfe
Managing Director
Dr Tania Wolfgramm
Chief Research Officer
We published The Enterprise AI Maturity Model last year. Since then, we've applied it across dozens of NZ organisations. The picture is clearer now, and more honest. Most NZ enterprises are earlier on the curve than they believe. That's not a criticism. It's a starting point.

The Distribution

Based on our assessments and market analysis, here's how NZ enterprise AI maturity distributes in early 2026:
[Chart: NZ Enterprise AI Maturity Distribution]
Forty percent of NZ enterprises sit at the Foundational level. They have some AI awareness, possibly some individual tool usage (ChatGPT, Copilot), but no enterprise strategy, no shared infrastructure, and no production AI capabilities. This is normal. It's also the stage where intervention has the highest ROI, because the right foundation decisions now prevent expensive corrections later.
Thirty percent are Developing. They've run pilots, possibly deployed one or two AI capabilities, and have some organisational awareness of what AI can do. The challenge at this stage is moving from isolated successes to a coherent platform approach.
Twenty percent are Established. These organisations have multiple AI capabilities in production, some shared infrastructure, and a governance framework. They're the ones seeing compound returns and pulling ahead. Most are in financial services, health, or large government agencies.
Ten percent are Leading. AI is embedded in core operations, the platform is mature, and new capabilities deploy in weeks rather than months. These organisations are rare in NZ, but they exist, and they're setting the benchmark.

What Each Stage Looks Like

Foundational (40%)

What we see:
  • Individual employees using ChatGPT, Copilot, or similar tools informally
  • No enterprise AI strategy or roadmap
  • Data quality and accessibility issues that would block AI deployment
  • Limited understanding of AI capabilities at leadership level
  • No dedicated AI budget or resourcing
What holds organisations here:
  • Competing priorities and limited budgets
  • Lack of internal AI expertise to evaluate opportunities
  • Uncertainty about where to start
  • Data infrastructure that isn't ready for AI workloads
The next step: AI Discovery. A structured sprint to identify high-value opportunities, assess data readiness, and build a prioritised roadmap. This doesn't require a large investment, and it replaces uncertainty with a clear plan.

Developing (30%)

What we see:
  • One or two AI pilots or production capabilities
  • Some internal AI expertise, often concentrated in a single team
  • Growing awareness of the platform vs project distinction
  • Data quality improvements underway but not complete
  • An AI strategy document that may or may not reflect reality
What holds organisations here:
  • Pilot-to-production gap: the demo works but production deployment stalls
  • No shared AI infrastructure, so each project rebuilds foundations
  • Governance is informal or document-based rather than embedded in systems
  • The business case for platform investment isn't yet proven internally
The next step: Build the first production capability on a shared foundation. The goal is to prove that the platform approach works by delivering the first capability with infrastructure that the second capability can reuse. This is the critical transition.

Established (20%)

What we see:
  • Multiple AI capabilities in production
  • Shared infrastructure (data pipelines, model management, governance)
  • Dedicated AI operations capability (even if small)
  • Measurable compound returns: each new capability is faster and cheaper
  • Board-level visibility of AI value and risk
What holds organisations here:
  • Scaling challenges: the platform works but isn't designed for the next ten capabilities
  • AI operations maturity: monitoring, evaluation, and continuous improvement aren't systematic
  • Talent: maintaining and growing AI capability requires specialists who are scarce in NZ's market
  • Cross-functional integration: AI is still owned by IT or digital, not embedded in business operations
The next step: Operationalise. Move from "we build AI" to "we run AI." Invest in monitoring, evaluation, and continuous improvement. Embed AI operations as a discipline alongside security, data management, and infrastructure operations.

Leading (10%)

What we see:
  • AI embedded in core business operations
  • Mature AI platform with sub-week deployment for new capabilities
  • AI governance as code: automated, auditable, continuous
  • AI literacy across the organisation, not just the technical team
  • Data-driven decision-making about AI investment and priorities
What distinguishes this stage: it's not about having more AI. It's about AI being a normal part of how the organisation works. The same way cloud computing is no longer a separate initiative, AI at this stage is just how things are done.

The Self-Assessment Problem

The most consistent finding across our maturity assessments: organisations overestimate their position. Enterprises that self-assess at Developing are often Foundational. Those that self-assess at Established are often Developing.
The reason is that AI maturity isn't about having AI. It's about having the organisational capability to deploy, operate, and govern AI systematically. Having a chatbot in production doesn't make you Established. Having a platform that enables the chatbot, the document processor, and the compliance checker, with shared governance and monitoring, is what Established looks like.
1-2 levels: the typical gap between enterprise self-assessment and independent AI maturity assessment. Source: RIVER Group maturity assessments, 2025-2026.

NZ-Specific Dynamics

Three factors make NZ's maturity curve distinct from global patterns:
Scale. NZ enterprises are smaller, so AI platform investment is larger relative to revenue. The economics favour partnerships over build-from-scratch approaches.
Talent. The AI talent pool in NZ is small and concentrated. Most Foundational and Developing organisations can't hire the expertise they need. Partnering isn't optional; it's the realistic path.
Trust. NZ business culture is relationship-driven. Enterprise AI adoption often follows trusted recommendations rather than vendor marketing. This slows initial adoption but improves quality, because organisations are more likely to choose approaches that actually work.

Practical Next Steps

Regardless of where your organisation sits:
  1. Get an honest assessment. Self-assessment is unreliable. Have someone external evaluate your maturity against objective criteria.
  2. Focus on the next level, not the top. Moving from Foundational to Developing delivers more value than aspirational plans to become Leading.
  3. Invest in data readiness. At every stage, data quality and accessibility are the most common blockers. Fix the data problems and AI becomes dramatically easier.
  4. Choose the platform path early. Even at Foundational stage, the decisions you make about your first AI deployment shape whether you build compound value or isolated capabilities.

The maturity curve isn't a competition. It's a map. Know where you are, understand what the next stage requires, and invest accordingly. The organisations that get this right in 2026 will be the ones that are honest about their starting point.