
The Product-Led AI Approach

AI as product, not project. What changes when you treat AI capabilities as products with users, not projects with deadlines.
20 June 2025·8 min read
Rainui Teihotua
Chief Creative Officer
Isaac Rolfe
Managing Director
Every enterprise AI project we have seen fail had the same structure: a project with a deadline, a deliverable, and a handoff. Every one that succeeded was treated as a product with users, feedback loops, and continuous improvement. The distinction sounds semantic. It is structural, and it changes everything.

The Project Trap

The project model works for deterministic technology. Build a system, test it, deploy it, hand it over. The system does what it was specified to do, and maintenance is about keeping it running, not about making it smarter.
AI does not work this way. AI capabilities improve with use, degrade with neglect, and require continuous adjustment as data changes, models update, and users discover new edge cases. Treating AI as a project means declaring victory at the worst possible moment: when the system is deployed but not yet mature.
We see this pattern constantly. An organisation runs an AI project. Twelve weeks. Nice demo. Deployed to production. Project complete. The team disbands. Six months later, the AI is producing worse results than at launch because nobody is monitoring drift, updating prompts, or responding to user feedback. The organisation concludes that AI does not work.
AI worked fine. The project model failed it.

What Product Thinking Changes

Users, Not Stakeholders

Project thinking has stakeholders who approve requirements and sign off on deliverables. Product thinking has users who interact with the system daily and whose behaviour reveals what works and what does not.
When you treat AI as a product, you watch how people actually use it. You notice that the claims assessors trust the AI's risk scoring but ignore its category suggestions. You notice that the customer service team uses AI for drafting responses but rewrites them 80% of the time. These patterns are signals. In a project model, they are invisible because the project is already complete.

Feedback Loops, Not Sign-Offs

Projects have milestones and sign-offs. Products have feedback loops and iteration cycles.
An AI product has a continuous feedback mechanism: user corrections, satisfaction ratings, usage patterns, and direct feedback. This data feeds into prompt improvements, model selection changes, retrieval optimisation, and UI refinements. The product gets better with use.
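To make the feedback mechanism concrete, here is a minimal sketch of what capturing those signals might look like. The event shape, field names, and in-memory store are all illustrative assumptions; a real product would feed these into its analytics pipeline.

```python
# A minimal sketch of capturing product feedback events.
# Event kinds and field names are hypothetical -- adapt them to
# whatever your AI product actually emits.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackEvent:
    user_id: str
    interaction_id: str
    kind: str      # e.g. "correction" | "rating" | "comment"
    payload: dict  # edited text, a 1-5 score, free-text feedback...
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# An in-memory list stands in for a real analytics store.
feedback_log: list[FeedbackEvent] = []

def record_feedback(user_id: str, interaction_id: str,
                    kind: str, payload: dict) -> None:
    feedback_log.append(FeedbackEvent(user_id, interaction_id, kind, payload))

record_feedback("u42", "i-001", "rating", {"score": 4})
record_feedback("u42", "i-002", "correction", {"edited": "revised draft"})
```

The point is not the plumbing but the habit: every correction and rating becomes data that can drive the next prompt or retrieval change.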
A project-model AI has none of this. The prompts are whatever was written at development time. The model is whatever was current at launch. The UI is whatever was designed before users interacted with it.
The best AI interface we built looked nothing like the one we designed. It evolved through twelve weeks of user feedback, each iteration revealing something we could not have anticipated in a requirements document.
Rainui Teihotua
Chief Creative Officer

Continuous Improvement, Not Maintenance

Project maintenance is about keeping the lights on. Product improvement is about making the system better.
AI products need active improvement: refining prompts based on error patterns, updating retrieval pipelines as knowledge changes, adjusting model routing as new models become available, redesigning interfaces as users reveal what they actually need.
This is not maintenance. It is product development. It requires a dedicated team, a roadmap, and ongoing investment. Organisations that budget for "AI maintenance" get systems that slowly degrade. Organisations that budget for "AI product development" get systems that continuously improve.

The Product Model for AI

Dedicated Product Team

An AI product needs an owner and a team. Not a project manager who will move on. A product owner who lives with the product, understands its users, and is accountable for its outcomes.
The team does not need to be large. A product owner, an engineer, and regular access to design and domain expertise are enough for most internal AI products. But they need to be dedicated, not borrowed from other projects.

Usage Analytics

You cannot improve what you do not measure. An AI product needs usage analytics that go beyond "how many requests per day" to "how are people using the outputs?"
Key metrics: adoption rate, correction rate (how often users modify AI outputs), completion rate (how often users accept the AI's work without modification), abandonment rate (how often users start an AI-assisted task and revert to manual), and satisfaction (direct user feedback).
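The metrics above can be computed from a simple event log. This sketch assumes each interaction ends in one of three outcomes ("accepted", "corrected", "abandoned") and that you know active and total user counts; real products will have richer data.

```python
# A sketch of the usage metrics above, computed from an event log.
# The outcome labels and user counts are assumptions for illustration.
def usage_metrics(events: list[dict],
                  active_users: int, total_users: int) -> dict:
    outcomes = [e["outcome"] for e in events]
    n = len(outcomes)
    return {
        "adoption_rate": active_users / total_users,
        "completion_rate": outcomes.count("accepted") / n,
        "correction_rate": outcomes.count("corrected") / n,
        "abandonment_rate": outcomes.count("abandoned") / n,
    }

events = ([{"outcome": "accepted"}] * 6
          + [{"outcome": "corrected"}] * 3
          + [{"outcome": "abandoned"}])
print(usage_metrics(events, active_users=40, total_users=50))
# -> adoption 0.8, completion 0.6, correction 0.3, abandonment 0.1
```

Even a crude version of this, run weekly, tells you far more than request counts ever will.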

Iteration Cadence

Set a regular improvement cadence. Weekly prompt reviews based on error patterns. Monthly model evaluation against new options. Quarterly UX improvements based on usage data.
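The cadence can be as simple as an explicit schedule check. The task names and intervals below mirror the weekly/monthly/quarterly rhythm above but are assumptions, not a prescription.

```python
# A minimal sketch of the improvement cadence as a schedule check.
# Task names and intervals are illustrative assumptions.
from datetime import date

CADENCE_DAYS = {
    "prompt_review": 7,        # weekly, driven by error patterns
    "model_evaluation": 30,    # monthly, against new model options
    "ux_review": 90,           # quarterly, driven by usage data
}

def due_tasks(last_run: dict[str, date], today: date) -> list[str]:
    return [task for task, interval in CADENCE_DAYS.items()
            if (today - last_run[task]).days >= interval]

last = {
    "prompt_review": date(2025, 6, 1),
    "model_evaluation": date(2025, 5, 25),
    "ux_review": date(2025, 4, 1),
}
print(due_tasks(last, date(2025, 6, 20)))  # -> ['prompt_review']
```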
This cadence keeps the product improving at a sustainable pace. Without it, improvements are ad hoc and reactive, only happening when something breaks badly enough to get attention.

Sunset Criteria

Not all AI products should live forever. Define criteria for when to sunset an AI capability: low adoption after a meaningful trial period, consistently poor quality that does not improve with iteration, or a change in business needs that makes the capability irrelevant.
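Sunset criteria work best when they are written down before the product ships. A hedged sketch of what an explicit check might look like, with thresholds that are purely illustrative assumptions:

```python
# A sketch of sunset criteria as an explicit, pre-agreed check.
# All thresholds here are illustrative assumptions, not recommendations.
def should_sunset(adoption_rate: float, satisfaction: float,
                  months_live: int, improving: bool) -> bool:
    trial_over = months_live >= 6            # "meaningful trial period" (assumed)
    low_adoption = adoption_rate < 0.2       # assumed adoption floor
    poor_quality = satisfaction < 3.0 and not improving  # 1-5 scale, assumed
    return trial_over and (low_adoption or poor_quality)

print(should_sunset(0.1, 4.2, months_live=9, improving=True))   # low adoption
print(should_sunset(0.6, 4.0, months_live=9, improving=True))   # healthy product
```

Agreeing on thresholds up front makes the sunset decision a routine check rather than a political fight.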
Sunsetting a failing AI product is better than maintaining it. A poorly performing AI product actively damages trust in AI across the organisation.

The Business Case

Product thinking costs more upfront. You are funding an ongoing team, not a one-time project. But the economics work in its favour:
Higher adoption. Products that improve based on user feedback get used. Projects that ship and stagnate get abandoned.
Lower total cost. A $200K AI project that nobody uses twelve months later has zero ROI. A $300K AI product (including twelve months of development) with 80% adoption and measurable business impact has positive ROI.
Compounding value. AI products that improve over time deliver increasing value. AI projects that stagnate deliver decreasing value as the organisation changes around them.
Organisational learning. Every iteration of an AI product teaches the organisation something about how AI works in their context. This learning compounds across products. The second AI product is cheaper and better because of what the first one taught you.

Getting Started

If you have existing AI projects, converting to a product model is straightforward:
  1. Assign an owner. Someone who is accountable for the product's outcomes, not just its uptime.
  2. Instrument usage. Start measuring how people actually use the AI, not just whether it is running.
  3. Establish a feedback loop. Make it easy for users to flag issues and suggest improvements.
  4. Set an iteration cadence. Weekly reviews, monthly improvements, quarterly strategy.
  5. Budget for improvement. Allocate ongoing resources for product development, not just maintenance.
The shift from project to product thinking is the single most impactful change most organisations can make in their AI strategy. It does not require new technology. It requires a different mindset about what AI is: not a deliverable, but a capability that grows.