If you're a CEO reading one document about AI before your 2025 planning cycle, make it this one. We've distilled what we've learned across dozens of enterprise AI engagements into the decisions, budgets, and structures that separate organisations that get value from AI from those that merely spend money on it.
What You Need to Know
- 2025 is the year AI stops being optional. The enterprises that built foundations in 2024 will compound their advantage. Those still in pilot mode will fall further behind, and the gap will keep widening.
- Budget 2-5% of revenue for AI capability across discovery, build, and ongoing operations. Under-investing guarantees you stay in pilot mode. Over-investing without governance guarantees waste.
- You need three roles, not thirty. An AI lead (strategy and prioritisation), a technical lead (architecture and delivery), and an AI governance owner (risk and compliance). Build from there.
- Governance is not a blocker. It's an accelerator. Organisations with governance frameworks in place deploy AI faster because they've already answered the hard questions.
- Measure business outcomes, not AI metrics. Model accuracy doesn't matter if the capability doesn't change how work gets done. Track time saved, decisions improved, and revenue influenced.
- 73% of enterprise AI pilots never reach production (Source: McKinsey & Company, The State of AI in 2024)
- 4× faster deployment for organisations with established AI governance frameworks (Source: Gartner, AI Governance and Trust Survey 2024)
The Strategic Decision: Foundation vs Projects
Before any other decision, you need to answer one question: are you building a foundation or running projects?
The project approach treats each AI capability as a standalone initiative: its own budget, team, infrastructure, and timeline. It's simpler to start but dramatically more expensive to scale. By your third project, you've built three separate data pipelines, three governance frameworks, and three integration patterns.
The foundation approach builds shared infrastructure with your first capability. Each subsequent capability is faster, cheaper, and more powerful because it inherits what came before. The first capability costs ~30% more. By the fourth, you're saving ~50% on total investment.
Our recommendation: If you're planning more than two AI capabilities over the next 18 months, invest in the foundation. The maths is unambiguous.
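As a rough sketch of that maths (the per-capability base cost and the reuse discount below are assumed, illustrative figures; only the ~30% first-capability premium and the ~50% saving by capability four come from this article):

```python
# Illustrative comparison of the project vs foundation approaches.
# Only the ~30% first-capability premium and the ~50% saving by capability
# four come from the article; BASE and REUSE_DISCOUNT are assumed figures.
BASE = 250_000          # assumed cost of one standalone capability
PREMIUM = 1.30          # foundation makes capability #1 ~30% dearer
REUSE_DISCOUNT = 0.25   # assumed cost fraction for capabilities reusing the foundation

def project_total(n: int) -> float:
    """Every capability is standalone, so each one costs full price."""
    return BASE * n

def foundation_total(n: int) -> float:
    """Capability #1 carries the premium; later ones reuse shared infrastructure."""
    if n == 0:
        return 0.0
    return BASE * PREMIUM + BASE * REUSE_DISCOUNT * (n - 1)

for n in range(1, 5):
    p, f = project_total(n), foundation_total(n)
    print(f"{n} capabilities: projects ${p:,.0f} vs foundation ${f:,.0f} "
          f"(saving {1 - f / p:.0%})")
```

The exact figures will differ by organisation; the point is the shape of the curve: a premium on capability #1 that is repaid several times over by capability #4.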
Budget Allocation: A Realistic Framework
Most enterprises get AI budgeting wrong in one of two ways: they dramatically under-fund (expecting a $50K pilot to deliver enterprise transformation) or they over-allocate to technology while under-funding the people and process changes that determine whether AI delivers value.
The Three Budget Categories
1. Discovery (15-20% of total AI budget)
Discovery is where you identify the right problems to solve, validate feasibility, and build the business case. Skipping this step is the single most expensive mistake in enterprise AI.
| Activity | Typical Investment | Timeline |
|---|---|---|
| AI readiness assessment | $10-30K | 1-3 weeks |
| Use case identification and prioritisation | $20-40K | 2-4 weeks |
| Technical feasibility assessment | $10-30K | 1-3 weeks |
| Business case development | Included above | - |
Budget $50-100K for a thorough discovery phase. This isn't overhead. It's insurance against building the wrong thing.
2. Build (50-60% of total AI budget)
This is the actual development, integration, and deployment of AI capabilities. Costs vary significantly based on complexity, but benchmarks help.
| Capability Complexity | Investment Range | Timeline |
|---|---|---|
| Single process automation (document classification, data extraction) | $80-200K | 8-14 weeks |
| Multi-step workflow (claims processing, contract review) | $150-400K | 12-20 weeks |
| Organisation-wide platform (AI foundation + 2-3 capabilities) | $300K-1M+ | 6-12 months |
Critical: These numbers include integration with existing systems, which typically accounts for 60-70% of the build effort. If a vendor quotes you only for the AI model, they're quoting 30% of the real cost.
3. Operate (25-30% of total AI budget, ongoing)
AI systems require ongoing investment. Models need monitoring and retraining. Governance needs to evolve with regulation. Users need support as workflows change.
| Activity | Typical Annual Cost |
|---|---|
| Model monitoring and maintenance | $3-8K/month |
| Governance and compliance | $2-5K/month |
| User support and change management | $2-4K/month |
| Infrastructure and hosting | $1-5K/month |
Plan for $100-250K annually in operational costs for a meaningful AI programme. This is often the most under-budgeted category.
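Summing the monthly ranges in the table above is a quick sanity check on that annual figure (pure arithmetic, no assumptions beyond the table):

```python
# Monthly cost ranges (in $K) from the Operate table above.
monthly_ranges = {
    "Model monitoring and maintenance": (3, 8),
    "Governance and compliance": (2, 5),
    "User support and change management": (2, 4),
    "Infrastructure and hosting": (1, 5),
}

low = sum(lo for lo, hi in monthly_ranges.values()) * 12   # $K per year
high = sum(hi for lo, hi in monthly_ranges.values()) * 12  # $K per year
print(f"Annual operate cost: ${low}K-${high}K")  # → $96K-$264K, in line with $100-250K
```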
The Budget Ratio Test
If your AI budget is more than 70% technology and less than 30% people/process, rebalance. Technology is the smallest determinant of AI success; the value lives in people adopting AI-augmented workflows.
Team Structure: Start Small, Scale Deliberately
You don't need a 20-person AI team to start. You need three roles filled well.
The Founding Three
AI Lead (strategic) - Owns the AI roadmap, prioritises use cases, manages stakeholder expectations, and ensures alignment between AI initiatives and business strategy. This person reports to you. They don't need to be a data scientist. They need to understand both technology and business deeply enough to make trade-off decisions.
Technical Lead (architecture) - Owns the technical architecture, makes build-vs-buy decisions, manages vendor relationships, and ensures the AI foundation is built for scale. This is your most expensive hire and your most important one.
Governance Owner (risk) - Owns the AI governance framework, ensures compliance with internal policies and emerging regulation, manages data sovereignty and privacy requirements. In many NZ organisations, this role can be combined with existing risk or compliance leadership, but they need dedicated time for AI.
Scaling the Team
| Stage | Team Size | Focus |
|---|---|---|
| Discovery (months 1-3) | 3 core + external partner | Readiness assessment, use case identification |
| Foundation (months 3-9) | 3-5 internal + delivery partner | First capability, shared infrastructure |
| Scale (months 9-18) | 5-10 internal | Additional capabilities, internal capability building |
| Mature (18+ months) | 8-15 internal | Self-sufficient delivery, partner for specialist needs |
Key principle: Use an external delivery partner to accelerate early stages while building internal capability in parallel. The goal is self-sufficiency by month 18, not permanent dependence on a vendor.
The organisations that succeed with AI aren't the ones with the biggest teams. They're the ones with the right three people making the first decisions, and a realistic plan for building capability over time.
Dr Tania Wolfgramm, Chief Research Officer
Governance: The Accelerator You're Probably Ignoring
AI governance is consistently misunderstood by executive teams. It's not a compliance burden. It's the mechanism that allows you to move fast with confidence.
The Minimum Governance Framework
Your 2025 AI governance framework needs five elements:
1. Use case approval process. How are new AI use cases evaluated, approved, and prioritised? Who has authority to greenlight? What criteria determine go/no-go?
2. Data governance. Where does training and operational data come from? Who owns it? What are the sovereignty requirements? How is personal information handled?
3. Risk classification. Not all AI carries equal risk. A document classification tool has different governance needs than a customer-facing decision system. Classify use cases by risk tier and apply proportionate controls.
4. Monitoring and audit. How do you know the AI is performing as expected? What triggers a review? Who reviews model outputs and on what cadence?
5. Incident response. When the AI gets it wrong (and it will), what happens? Who's responsible? How is the affected party notified? What's the escalation path?
34% of NZ/AU enterprises have a formal AI governance framework (Source: NZTech, AI Readiness in Aotearoa 2024).
The 34% who have governance frameworks are deploying AI faster than the 66% who don't. Governance answers the questions that otherwise stall every project at the approval stage.
Realistic Timelines: What to Expect
Enterprise AI doesn't deliver overnight. Here's what a realistic 2025 timeline looks like:
Q1 2025: Discover and assess. Complete AI readiness assessment. Identify and prioritise top 3-5 use cases. Select delivery approach (build, buy, partner). Establish governance framework.
Q2 2025: Build the foundation. Develop first AI capability on shared infrastructure. Integrate with core systems. Train affected teams. Begin measuring outcomes.
Q3 2025: Compound. Deploy second capability on the foundation. Measure compound advantage (should be 30-40% faster than capability #1). Refine governance based on real-world experience.
Q4 2025: Scale and plan. Third capability live. Internal team taking over operational responsibility from delivery partner. AI roadmap for 2026 based on measured results.
The trap to avoid: Trying to compress this into 6 months. Discovery gets skipped, foundations don't get built, and you end up with a pilot that never scales. We've seen this pattern repeatedly.
What to Measure: The CEO Dashboard
Forget model accuracy, F1 scores, and inference latency. Those are technical metrics for your technical team. As CEO, you need five numbers:
| Metric | What It Tells You | Target |
|---|---|---|
| Time to value | How long from decision to deployed capability | Decreasing with each capability |
| Process efficiency gain | Time/cost saved in AI-augmented workflows | 30-70% depending on use case |
| Adoption rate | Percentage of target users actively using AI capabilities | >70% within 3 months of deployment |
| Foundation reuse | Percentage of new capability built on existing infrastructure | >50% by capability #2 |
| Governance incidents | Number of AI-related risk events requiring escalation | Decreasing over time |
The Adoption Metric
Adoption rate is the most important metric on this list. An AI capability with 95% accuracy and 20% adoption delivers less value than one with 85% accuracy and 90% adoption. Redesigning work around AI is where the value is captured.
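That trade-off is simple arithmetic. Treating delivered value as roughly accuracy × adoption (a deliberately crude model), the figures above give:

```python
# Crude value model: delivered value ≈ accuracy × adoption.
# The 95%/20% and 85%/90% figures come from the paragraph above.
def delivered_value(accuracy: float, adoption: float) -> float:
    return accuracy * adoption

accurate_but_unused = delivered_value(0.95, 0.20)    # ≈ 0.19
adopted_but_imperfect = delivered_value(0.85, 0.90)  # ≈ 0.77
print(f"{accurate_but_unused:.2f} vs {adopted_but_imperfect:.2f}")
```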
The 2025 Decision Matrix
To bring this together, here's how to sequence your decisions:
Decision 1 (October-November 2024): Commit to an AI strategy. Allocate discovery budget. Assign executive sponsor.
Decision 2 (December 2024-January 2025): Based on discovery findings, approve AI investment. Choose foundation vs project approach. Select delivery partner.
Decision 3 (Q1 2025): Approve first capability build. Establish governance framework. Begin hiring or upskilling the founding three.
Decision 4 (Q2-Q3 2025): Based on first capability results, approve scale investment. Transition operational responsibility. Plan capability #2.
Each decision is informed by the previous one. The total investment builds progressively. You're not committing $1M on day one. You're committing $50-100K for discovery, then making informed decisions at each stage.
The Cost of Waiting
The compound advantage means delay has compound cost. Every quarter you wait, the organisations that started earlier add another capability to their foundation. By mid-2025, the gap between foundation builders and pilot experimenters will be visible in competitive dynamics.
AI is not a technology decision you can delegate. It's a strategic decision that requires CEO engagement, realistic budgets, and organisational commitment. The organisations that get this right in 2025 will define the competitive landscape for the rest of the decade.
The question isn't whether your organisation will use AI. It's whether you'll build the foundation to use it well.