In 2014, the General Manager of Samoa's National Health Service handpicked me to build something that didn't exist yet: a national monitoring and evaluation unit. No framework. No templates. No precedent within the organisation. Just a clear need and a mandate to figure it out. What followed was two years of the most challenging and rewarding work of my career.
What You Need to Know
- Samoa's National Health Service had no centralised M&E framework before 2014. We co-developed one in collaboration with Counties Manukau DHB in New Zealand
- The framework produced quarterly reports to the GM, the Board, and Cabinet - creating accountability loops that hadn't existed before
- The biggest lesson: start with what people already collect, not what you wish they collected. Trust comes before data quality
- This experience directly shaped my approach to AI and data infrastructure - the principles transfer completely
Starting Point
To understand what we were building, you need to understand what Samoa's health system looked like.
- ~200,000 - population served by Samoa's National Health Service across two main islands (Source: Samoa Bureau of Statistics, Population Census 2016)
- 1 national hospital (Tupua Tamasese Meaole Hospital), supplemented by district hospitals and rural health centres (Source: Samoa Ministry of Health, Health Sector Plan 2008-2018)
The health system served the entire country from a network of facilities ranging from the national hospital in Apia to small rural health centres in villages accessible only by unpaved roads. Data collection happened at each of these points, but it flowed upward inconsistently. Some clinics kept careful paper records. Others had basic logs. The national picture was incomplete.
Before the M&E unit, I'd spent three years as Overseas Referrals Coordinator and Data Officer. That role taught me the system from the inside. I managed pathology specimen referrals to LabPlus in Auckland and to Histopath, and maintained national surveillance data for STIs, HIV/AIDS, hepatitis, and TB. I knew which data existed, where it lived, who collected it, and - just as importantly - where the gaps were.
The Collaboration That Made It Possible
We didn't build the framework alone. The GM arranged a collaboration with Counties Manukau DHB in New Zealand, which had experience with Pacific health data systems and understood both the clinical and cultural context. It was through this collaboration that I first worked alongside Dr Tania Wolfgramm, years before I joined RIVER. Tania was developing her Fanau Ola Framework at Counties Manukau at the time, embedding Pacific and Māori worldviews into healthcare planning. Our work ran in parallel - her framework shaping how Pacific health systems think about outcomes, my M&E unit shaping how Samoa's system measures them.
That partnership was critical. Counties Manukau brought technical M&E expertise and reporting frameworks that had been tested in a health system with similar Pacific populations. We brought deep knowledge of Samoa's health infrastructure, its constraints, and its people. The framework we co-developed was neither a copy of the New Zealand model nor something invented from scratch. It was adapted, designed for the realities we actually faced. The connection with Tania stuck. When I eventually joined RIVER years later, it was because that shared history meant we already understood how each other thinks about data, health, and Pacific communities.
Those realities were significant. Intermittent internet connectivity meant we couldn't rely on cloud-based systems. Power outages were common during cyclone season. Staff at rural health centres were already stretched thin, and adding reporting burden without adding value to their daily work would guarantee failure.
What Worked
Starting with existing collection, not ideal collection. This was the most important decision we made. The instinct in any new data framework is to design the perfect data collection instrument and roll it out. We did the opposite. We mapped what every facility was already collecting, in whatever format they were using. Then we built the framework around that existing data, adding new collection points only where absolutely necessary.
This approach meant our first reports weren't perfect. They had gaps and inconsistencies. But they were real. They reflected what was actually happening in the system, not what we hoped was happening. And because we didn't ask clinic staff to learn an entirely new system, adoption was manageable.
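To make that concrete, here is a minimal sketch, in Python, of the shape that mapping exercise takes. Everything in it is hypothetical - the field names, the facility formats, even the idea of doing it in code rather than on paper, which is largely where we did it - but the direction of adaptation is the real principle: write an adapter for each format facilities already use, instead of asking facilities to adopt yours.

```python
from dataclasses import dataclass

# One national schema, derived from what facilities already collect.
# All field names here are hypothetical, for illustration only.
@dataclass
class MonthlyReturn:
    facility: str
    district: str
    month: str            # e.g. "2014-07"
    vaccinations: int
    outpatient_visits: int

def from_clinic_logbook(row: dict) -> MonthlyReturn:
    """A rural clinic's logbook, keyed by its own local column names."""
    return MonthlyReturn(
        facility=row["clinic"],
        district=row["district"],
        month=row["month"],
        vaccinations=int(row.get("epi_doses", 0)),
        outpatient_visits=int(row.get("opd_total", 0)),
    )

def from_hospital_register(row: dict) -> MonthlyReturn:
    """The hospital register records the same facts under different names."""
    return MonthlyReturn(
        facility=row["facility_name"],
        district=row["region"],
        month=row["reporting_month"],
        vaccinations=int(row.get("immunisations", 0)),
        outpatient_visits=int(row.get("outpatients", 0)),
    )

# Each source keeps its existing format; the framework adapts to it,
# not the other way around.
ADAPTERS = {
    "clinic_logbook": from_clinic_logbook,
    "hospital_register": from_hospital_register,
}
```

New collection points only enter the picture when no adapter can recover a field the framework genuinely needs.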
Making reports useful to the people filling them out. This is where most top-down reporting frameworks fail. Data flows upward to decision-makers, but nothing flows back down. The people doing the hard work of collecting and entering data never see what it produces. They feel like they're feeding a machine that gives nothing back. And trust comes before data quality: nobody guards the accuracy of numbers they never benefit from.
We designed the reporting cycle to return value at every level. Clinic staff received facility-level summaries that helped them see their own performance in context. District health officers got comparisons across their facilities. The national team got the full picture. When a nurse in a rural health centre could see that her clinic's vaccination rates were tracking above the district average, the data entry stopped feeling like administrative burden and started feeling like professional feedback.
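As a sketch of what returning value at every level can look like in data terms - the facilities and figures below are invented for illustration - the same records roll up three ways, and the facility-level comparison is the piece that flows back to the clinic.

```python
from collections import defaultdict

# (facility, district, vaccinations) for one quarter - all figures invented.
records = [
    ("Safotu HC", "Savai'i North", 42),
    ("Samauga HC", "Savai'i North", 51),
    ("Tuasivi Hospital", "Savai'i North", 88),
    ("TTM Hospital", "Apia Urban", 310),
]

# District view: group each facility under its district.
by_district = defaultdict(list)
for facility, district, vaccinations in records:
    by_district[district].append((facility, vaccinations))

# Facility feedback: each clinic against its own district average.
for district, rows in by_district.items():
    avg = sum(v for _, v in rows) / len(rows)
    for facility, v in rows:
        status = "above" if v > avg else "at or below"
        print(f"{facility}: {v} vaccinations, {status} the {district} average of {avg:.0f}")

# National picture: one line for the GM's quarterly report.
print(f"National total: {sum(v for _, _, v in records)} vaccinations")
```

The national line is what the GM needs; the per-facility lines are what keep the data honest.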
Quarterly reporting as a forcing function. We established a quarterly reporting cycle that went all the way to Cabinet. This created accountability that hadn't existed before. When you know the GM is presenting your numbers to the Board, and the Board is presenting to Cabinet, the data matters. Errors get corrected. Gaps get filled. The cycle itself became a quality improvement mechanism.
I also relieved as Executive Assistant to the GM for eleven months during this period. That dual perspective - seeing the data infrastructure from both the technical side and the executive decision-making side - shaped how I thought about what reports needed to contain. The GM didn't need raw numbers. She needed context, trends, and clear implications for resource allocation.
What I'd Do Differently
The framework relied heavily on personal relationships. I knew the staff, they trusted me, and that trust drove adoption. But it also meant the system was vulnerable to personnel changes. If I'd had more time, I would have invested more in documentation and training that could survive staff turnover.
I'd also push harder for digital data collection from the start, even simple mobile-based forms. We were cautious about technology adoption given the infrastructure constraints, and that caution was reasonable. But even basic digital collection would have reduced the manual data compilation work that consumed so much time each quarter. Data sovereignty conversations should have started earlier too.
What It Taught Me
What Louise built in Samoa is the kind of foundational data work that makes everything else possible. You can't have AI, you can't have analytics, you can't have evidence-based policy without someone first doing the hard work of building the system that produces reliable data. That work is unglamorous and essential.
Dr Tania Wolfgramm
Chief Research Officer
Building Samoa's M&E framework taught me that data infrastructure is fundamentally about people. The technical components - the databases, the forms, the reporting tools - are the easy part. The hard part is building trust with the people who collect the data, making the system valuable to them, and creating accountability structures that sustain quality over time.
Every principle I apply to AI work today traces back to those two years in Samoa. Start with what exists. Build for the people who use the system, not just the people who read the reports. Create feedback loops. Design for the constraint, not the ideal. Make the system resilient to imperfect conditions, because conditions are always imperfect.
When I prototype an AI agent now, I'm still asking the same questions I asked in that first year of the M&E unit. What data do we actually have? Who collects it, and why would they care about quality? What decision does this output support? How will we know if it's working?
The tools change. The questions don't. If you're building data foundations for AI, we bring this same approach.
I'm a proud mother and I hold the Matai/Chiefly Title of So'oalo from Samauga, Savai'i, Samoa. The work I did at NHS sits alongside those roles as something I carry with quiet pride. It was nation-building, in a small and very practical way.
