
From M&E to AI - Why Data People Make the Best AI Adopters

Monitoring and evaluation professionals have spent decades doing what AI promises to automate. That experience is an asset, not a redundancy.
5 December 2025·8 min read
Dr Tania Wolfgramm
Chief Research Officer
Louise Epa
AI Analyst & Research Consultant
There's a quiet panic running through the monitoring and evaluation community. AI can now do in minutes what used to take weeks: pattern recognition across large datasets, anomaly detection, trend analysis, automated reporting. For professionals who've built careers around these activities, the natural question is whether they're about to become redundant. They're not. They're about to become essential.

What You Need to Know

  • M&E professionals already practise the core disciplines AI adoption requires: defining success criteria before building, collecting data from messy real-world environments, evaluating whether interventions actually work, and translating findings for decision-makers
  • AI doesn't replace evaluation thinking - it makes it more important. Deploying AI with poor evaluation of its outputs is worse than not deploying AI at all
  • Louise's journey from building Samoa's national M&E framework to prototyping AI agents at RIVER demonstrates the direct transferability of these skills
  • Organisations that pair AI tools with M&E expertise get better outcomes than those that treat AI as a purely technical function

Te Pūtake - The Root Competency

I've been thinking about this through the lens of whakapapa - the layered connections between what came before and what comes next. M&E as a discipline has a whakapapa that runs directly into AI adoption. The skills aren't adjacent. They're ancestral.
Consider what a strong M&E professional does daily. They define what success looks like before an intervention begins. They build frameworks for collecting data from environments that are noisy, incomplete, and politically charged. They evaluate whether something actually worked, not just whether it produced outputs. And they present findings to decision-makers who need to act on them, often with incomplete information and competing priorities.
Every single one of those competencies maps onto AI adoption.

Defining Success Before Building

This is where most AI projects go wrong. Teams start with the technology - "we'll use GPT-4" or "we need a RAG pipeline" - and work backwards to the problem. M&E professionals are trained to do the opposite. You start with the outcome. What change are you trying to create? How will you know it's happened? What data would demonstrate that change?
When Louise began prototyping AI agents at RIVER for knowledge retrieval, she didn't start with the model architecture. She started with the evaluation framework. What does a good retrieval result look like? How do we measure whether the agent reduced manual triage time? What's the baseline we're comparing against?
That's M&E thinking applied to AI. And it's the reason her prototype delivered measurable results - an estimated 60% reduction in manual triage time - while many AI prototypes produce impressive demos that can't demonstrate real impact.
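The evaluation-framework-first approach can be sketched in a few lines: define the baseline, the target, and the success criteria before any model code exists. All names and numbers below (triage times, thresholds, hit rates) are illustrative, not RIVER's actual framework.

```python
# A minimal sketch of "define success before building". The criteria are
# hypothetical - the point is that they exist before the prototype does.
from dataclasses import dataclass


@dataclass
class EvalCriteria:
    baseline_minutes: float         # measured manual triage time per item
    target_reduction: float         # e.g. 0.5 means "at least 50% faster"
    min_retrieval_hit_rate: float   # share of queries with a relevant doc in top-k


def meets_criteria(criteria: EvalCriteria,
                   observed_minutes: float,
                   hit_rate: float) -> bool:
    """Check a prototype against criteria that were fixed up front."""
    reduction = 1 - observed_minutes / criteria.baseline_minutes
    return (reduction >= criteria.target_reduction
            and hit_rate >= criteria.min_retrieval_hit_rate)


criteria = EvalCriteria(baseline_minutes=20.0,
                        target_reduction=0.5,
                        min_retrieval_hit_rate=0.8)

# 8 minutes against a 20-minute baseline is a 60% reduction.
print(meets_criteria(criteria, observed_minutes=8.0, hit_rate=0.85))  # True
```

The design choice is the article's point: `meets_criteria` is written before the agent, so the demo can't quietly redefine success afterwards.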

Collecting Data From Imperfect Environments

AI engineers often work with clean datasets. M&E professionals never do.
Louise spent years collecting health data across Samoa's clinics and hospitals. Paper records. Inconsistent data entry. Staff who were overworked and under-resourced. Systems that went offline during cyclone season. You learn quickly that the data you have is never the data you want, and you build systems that work with what's actually available.
That instinct is gold in enterprise AI. Real organisational data is messy, incomplete, and politically sensitive. The people who know how to work with imperfect data - cleaning it, understanding its biases, knowing what it can and can't tell you - are the people who build AI systems that actually function in production.
"When I was building the M&E framework in Samoa, I learned that the most important data question isn't 'what should we collect?' It's 'what are people already collecting, and how do we make that useful?' AI adoption works the same way. You start with what exists, not what you wish existed."
Louise Epa, AI Analyst & Research Consultant
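"Start with what exists" has a practical shape in code: normalise what the sites actually send, and flag gaps rather than silently dropping them. The records, field names, and clinic names below are invented for illustration.

```python
# Sketch: working with the data you actually have. Different sites send
# different key casing, stray whitespace, and missing values.
RAW_RECORDS = [
    {"clinic": "Apia Central", "visits": "42", "month": "2024-03"},
    {"Clinic": "savaii north", "Visits": None, "month": "2024-03"},
    {"clinic": " Apia Central ", "visits": "38", "Month": "2024-04"},
]


def normalise(record: dict) -> dict:
    """Lower-case keys, trim strings, and flag (rather than drop) missing values."""
    out = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = value.strip()
        out[key.strip().lower()] = value
    out["visits"] = int(out["visits"]) if out.get("visits") else None
    out["complete"] = out["visits"] is not None   # keep the gap visible
    return out


clean = [normalise(r) for r in RAW_RECORDS]
known = [r["visits"] for r in clean if r["complete"]]
print(f"{len(known)}/{len(clean)} usable records, mean visits {sum(known) / len(known):.1f}")
# 2/3 usable records, mean visits 40.0
```

Keeping the incomplete record with a `complete` flag, instead of discarding it, is the M&E instinct: you need to know what the data can't tell you, not just what it can.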

Evaluating Whether Interventions Actually Work

Here's where M&E expertise becomes critical in ways most organisations haven't recognised yet. AI systems produce outputs. Someone needs to evaluate whether those outputs are actually good, whether they're improving outcomes, and whether they're creating unintended consequences.
This is evaluation. It's what M&E professionals do.
A large language model can generate a report summary. But is the summary accurate? Does it capture what matters? Did it miss context that changes the meaning? Is it consistently reliable, or does quality vary in patterns that need investigation?
These questions require evaluation frameworks, not just accuracy metrics. They require understanding what "good" looks like in context, not just in aggregate. M&E professionals think this way instinctively because they've spent careers in environments where the difference between an intervention looking successful and actually being successful is the difference that matters.
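One way to make those questions concrete is a rubric that scores several dimensions separately instead of collapsing everything into one accuracy number. The dimensions and checks below are illustrative stand-ins for a real evaluation framework.

```python
# Sketch of a rubric-style evaluation for generated summaries: separate
# scores per question, not a single aggregate metric.
def evaluate_summary(summary: str,
                     source_keywords: set,
                     forbidden_claims: set) -> dict:
    """Score a summary on coverage, unsupported claims, and length."""
    text = summary.lower()
    covered = {k for k in source_keywords if k in text}
    hallucinated = {c for c in forbidden_claims if c in text}
    return {
        "coverage": len(covered) / len(source_keywords),  # did it capture what matters?
        "hallucinations": sorted(hallucinated),           # did it add unsupported claims?
        "within_length": len(summary.split()) <= 150,     # does it respect constraints?
    }


report = evaluate_summary(
    "Vaccination coverage rose in rural clinics despite staffing gaps.",
    source_keywords={"vaccination", "rural", "staffing"},
    forbidden_claims={"funding increase"},
)
print(report)
```

Keyword matching is a crude proxy - a production rubric would use human review or stronger checks - but the structure is the point: each evaluation question gets its own answer, so quality problems show up as patterns rather than vanishing into an average.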

Presenting Findings to Decision-Makers

The final link in the chain. M&E professionals are translators. They take complex data, contextualise it, and present it in ways that support decisions. They know that a number without context is meaningless. They know that stakeholders have different needs - a board member wants trends and implications, a programme manager wants actionable detail.
AI adoption needs exactly this translation layer. AI outputs need to be contextualised, caveated, and connected to decisions. The people who can do this - who can stand between a model's output and a decision-maker's action - are the people who make AI adoption work in practice.

Te Ara Whakamua - The Path Forward

The M&E community is sitting on a competitive advantage it hasn't fully recognised. The skills that made you effective in programme evaluation - rigour, scepticism, contextual thinking, stakeholder communication - are the skills that AI adoption needs most.
Don't retrain away from M&E. Retrain into the intersection of M&E and AI. Learn enough about the technology to be a capable partner to engineers. But bring the evaluation discipline that most engineering teams lack.
Louise's path from Samoa's health data infrastructure to AI prototyping wasn't a career pivot. It was a direct line. The discipline transferred. The instincts transferred. The commitment to outcomes over outputs transferred.
If you've spent years asking "did this actually work?" then you already have the most important skill in AI adoption. The tools are new. The question is the same. If you want to explore how your evaluation background applies to AI, let's talk.