You can have the best AI strategy in the country and the most capable models on the market. Without the right team structure, you'll produce impressive demos that never reach production. After two years of enterprise AI delivery, here's what we've learned about building teams that actually ship.
What You Need to Know
- The most common AI team mistake is hiring data scientists and expecting products. Data science, AI engineering, and product development are different disciplines with different skills.
- Every successful AI team we've seen has four roles covered: domain expertise, AI/ML engineering, product/UX, and someone who owns adoption.
- Small, cross-functional teams outperform large, siloed teams for AI delivery. 4-8 people with the right mix beats 20 people in separate departments.
- The hardest role to fill isn't the ML engineer. It's the person who bridges business and technology. Call them a product manager, a technical BA, or an AI translator. Without them, the team builds the wrong thing.
4-8 people is the optimal AI delivery team size for a single capability, based on our delivery experience.
Source: RIVER, enterprise AI delivery data, 2023-2024
The Four Roles That Matter
You don't need all of these to be full-time dedicated people. In smaller organisations, one person might cover two roles. What matters is that every role is covered, not that you have a large headcount.
1. Domain Expert
The person who knows the business problem intimately. In a claims processing AI project, that's a senior claims handler. In a compliance AI, that's a compliance officer. They define "good enough," validate outputs, and catch the errors that technical people miss.
Why they're essential: AI teams without domain expertise build technically interesting solutions to the wrong problems. Or they build solutions that work on test data and fail on real-world edge cases that any domain expert would have flagged in week one.
Common mistake: Treating domain expertise as a part-time consultation rather than embedded team participation. The domain expert needs to be reviewing AI outputs weekly, not quarterly.
2. AI/ML Engineer
The person who integrates models, builds pipelines, engineers prompts, and handles the technical infrastructure. This isn't a data scientist role (though data science skills are useful). It's an engineering role focused on building reliable, production-grade AI systems.
Why they're essential: Models don't put themselves into production. The gap between a working prototype and a production system is enormous: error handling, monitoring, scaling, security, testing, and integration with existing systems.
Common mistake: Hiring a data scientist when you need an engineer. Data scientists explore and experiment. Engineers build and ship. You need both at different stages, but if you're trying to get to production, engineering skills matter more.
3. Product/UX
The person who designs the interface and the user experience. In AI products, this role is more important than in traditional software because the interaction patterns are less established, user trust is lower, and the consequences of poor UX are severe (abandonment, not just frustration).
Why they're essential: The best AI with bad UX doesn't get used. Enterprise users are pragmatic. If the AI tool is harder to use than the manual process, they'll stick with the manual process regardless of how accurate the AI is.
Common mistake: Adding UX at the end, after the AI works. By then, the interaction model is baked into the technical architecture and changing it is expensive.
4. Adoption Owner
The person responsible for making sure people actually use the thing. This might be a change manager, a project manager with change skills, or a product manager who owns outcomes, not just features.
Why they're essential: Because the build is half the job. Getting an AI system into daily use by the target users requires training, communication, feedback loops, and iterative improvement based on real usage patterns.
Common mistake: Assuming adoption happens because the tool is good. It doesn't. Every AI deployment needs an explicit adoption plan.
Team Structures That Work
The Embedded Pod
A small cross-functional team (4-8 people) embedded within the business unit they're building for. They sit with the users, share context, and iterate rapidly. This works best for focused AI capabilities within a single department.
The Platform Team + Feature Teams
A central platform team builds shared AI infrastructure (model hosting, data pipelines, monitoring, governance). Feature teams build specific AI capabilities on top of the platform. This works for organisations building multiple AI capabilities across departments.
The Partner-Augmented Team
Internal team owns the domain expertise and adoption. External partner provides AI engineering and accelerates delivery. This works for organisations that can't hire a full AI team but have strong internal domain knowledge.
This is the model we use most often at RIVER. The client brings the domain expertise and organisational context. We bring the AI engineering, architecture, and delivery methodology. The combined team delivers faster than either could alone.
The Team Health Check
Can your AI team answer these four questions?
- "What business problem are we solving?" (domain)
- "How are we building it?" (engineering)
- "How will users interact with it?" (product/UX)
- "How will we get people to use it?" (adoption)
If any question draws a blank, you have a gap.
Scaling the Team
Start small. A single AI capability doesn't need 20 people. It needs 4-8 good people with the right mix. Scaling happens by adding capabilities, not by growing the team for a single capability.
The platform team model (shared infrastructure, multiple feature teams) scales well. Each new capability reuses the platform and adds 2-4 people for the feature work. This is the compound model: the infrastructure investment pays off more with each additional capability.
