On 13 December 2012, Cyclone Evan made landfall in Samoa. Winds over 200 km/h. Flooding that swept entire villages into the sea. I was volunteering with the Red Cross, and within hours, the question wasn't "what technology do we have?" It was "what still works?"
The Short Version
- The systems that survived Cyclone Evan weren't sophisticated. They were simple, redundant, and designed for the worst case.
- Paper backups, clear communication chains, and local knowledge proved more reliable than any digital system when infrastructure failed.
- The same principles apply to technology systems: production-ready doesn't mean feature-complete. It means it works when things go wrong.
What Happened
Cyclone Evan was the most destructive cyclone to hit Samoa in decades. It killed at least 14 people, displaced thousands, and caused an estimated US$200 million in damage across Samoa and neighbouring countries.
$200M+
estimated damage from Cyclone Evan across Samoa and the Pacific region
Source: Government of Samoa Post-Disaster Needs Assessment, 2013
I was in Apia when the storm hit. Power went down. Phone networks went down. Internet, obviously, went down. Roads were blocked by fallen trees and debris. The airport closed. The hospital was overwhelmed.
In the Red Cross response, we had to coordinate food distribution, medical supplies, shelter allocation, and missing persons tracking across a country where every communication channel had just been wiped out.
The Systems That Worked
Here's what I remember most clearly: the things that worked were the things nobody had thought much about beforehand. They weren't impressive. They were boring.
Paper registers. Every Red Cross distribution point had paper-based tracking for who received supplies. When we couldn't update the central database (because there was no connectivity), those paper records became the only source of truth. Weeks later, when connectivity returned, we reconciled. But in the critical first days, paper kept things moving.
Radio networks. Amateur radio operators and the Samoa Red Cross's HF radio network became the primary communication backbone. Not the internet. Not satellite phones (which were overwhelmed). Radio. A technology most people would consider outdated. Sometimes the case for boring technology is the strongest one you can make.
Local knowledge. The village matai system meant that every community had a designated leader who could account for their people. When we needed to know how many people in a village needed shelter, we didn't run a census. We asked the matai. They knew. They'd always known.
Pre-positioned supplies. The supplies that saved lives in the first 48 hours were already on the island. Not in a warehouse in Auckland waiting for a flight. Already there, already distributed to regional hubs, already accessible without roads or airports.
What Didn't Work
The digital coordination system we'd set up required internet access. Gone.
The centralised reporting framework required connectivity to a server in Apia. When the server room flooded, that was gone too.
Contact databases lived on laptops that ran out of battery within 12 hours, with no way to charge them.
These weren't bad systems. In normal conditions, they worked well. But they'd been designed for normal conditions: the same mistake we see in enterprise AI projects that skip readiness work. When conditions stopped being normal, they stopped being useful.
Why I Think About This When I Build Systems Now
I've carried the Cyclone Evan experience into every piece of work I've done since. At the National Health Service in Samoa, building the M&E framework, I insisted on paper-based fallback protocols for every digital process. My colleagues sometimes thought this was excessive. I'd lived through what happens when you don't have them.
When I look at technology systems now, I ask a specific set of questions that come directly from disaster response:
What happens when the connection drops? Not "will it reconnect automatically?" but "what does the user do in the meantime? Can they keep working? Is their data safe?"
Where's the single point of failure? In Apia, it was the server room. One flood, and the entire reporting system was gone. Every system has a single point of failure. The question is whether you've found it before reality does.
Who can use this without training? In a disaster, the people operating systems aren't always the people who were trained on them. If the system requires specialised knowledge to use, it will fail when the specialist isn't available.
Does it degrade gracefully? The best systems don't just work or fail. They work at reduced capacity. They lose features but keep core function. The paper registers were a degraded mode of the digital tracking system, but they maintained the core function: knowing who had received supplies and who hadn't.
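The first and last of these questions can be made concrete in code. Here's a minimal sketch of the offline-first pattern the paper registers embodied, using a hypothetical `DistributionLog` class (the names and the JSON-lines format are my illustration, not any specific system): every write lands in a local append-only file first, and unsynced entries are reconciled once connectivity returns.

```python
import json
import time
from pathlib import Path

class DistributionLog:
    """Offline-first register: every record lands in a local
    append-only file before anything else, then syncs when the
    network returns. The local file plays the role of the paper
    register -- it is the source of truth until reconciliation."""

    def __init__(self, path="register.jsonl", send=None):
        self.path = Path(path)
        self.send = send  # callable that uploads one record; may raise

    def record(self, village, supplies):
        # Core function always works: write locally, no network needed.
        entry = {"ts": time.time(), "village": village,
                 "supplies": supplies, "synced": False}
        with self.path.open("a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

    def reconcile(self):
        # Push unsynced entries upstream. A failed send leaves the
        # entry marked unsynced, ready for the next attempt.
        entries = [json.loads(line)
                   for line in self.path.read_text().splitlines()]
        for e in entries:
            if not e["synced"]:
                try:
                    self.send(e)
                    e["synced"] = True
                except Exception:
                    pass  # stay offline-safe; retry on the next pass
        self.path.write_text(
            "".join(json.dumps(e) + "\n" for e in entries))
        return sum(e["synced"] for e in entries)
```

The point of the sketch is the ordering: the local write is unconditional and cheap, and synchronisation is a separate, retryable step that can fail without losing data.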
The Real Meaning of Production-Ready
In technology, "production-ready" usually means the features are complete, the tests pass, and the deployment pipeline works. That's a starting point.
Real production readiness, the kind I learned in Samoa, means asking harder questions. What happens when the database is unreachable? What happens when the third-party API goes down? What happens when the user is stressed, distracted, and has no time to troubleshoot?
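One way to make "what happens when the API goes down" a design decision rather than an accident is a fallback wrapper around the unreliable dependency. This is a sketch under my own assumptions, not any particular library's API: a hypothetical `FallbackLookup` that serves the last known-good value when the live call fails, and backs off from a dead service for a cooldown period (a tiny circuit breaker).

```python
import time

class FallbackLookup:
    """Wrap an unreliable dependency: try the live call, fall back
    to the last known-good value when it fails, and stop hammering
    a dead service for a cooldown period."""

    def __init__(self, fetch, cooldown=30.0):
        self.fetch = fetch          # callable that may raise
        self.cooldown = cooldown    # seconds to back off after a failure
        self.cache = {}             # last known-good values (degraded mode)
        self.tripped_until = 0.0    # circuit stays open until this time

    def get(self, key):
        if time.time() >= self.tripped_until:
            try:
                value = self.fetch(key)
                self.cache[key] = value      # refresh last known good
                return value, "live"
            except Exception:
                # Open the circuit: skip live calls for the cooldown.
                self.tripped_until = time.time() + self.cooldown
        if key in self.cache:
            return self.cache[key], "stale"  # degraded but functional
        raise LookupError(f"no live or cached value for {key!r}")
```

Returning the `"live"`/`"stale"` tag alongside the value is deliberate: the caller, like the stressed user in the question above, should know it is working from a degraded answer rather than silently trusting it.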
A system that works perfectly in ideal conditions and fails in real conditions isn't production-ready. It's a prototype with good marketing.
I don't say this to be cynical. I say it because I've stood in a flooded coordination centre with no power, no internet, and a list of villages that needed food by morning. The systems that got food to those villages were the simple ones. The ones with paper backups and radio fallbacks and local people who knew their communities.
Technology should learn from that. Not every system will face a cyclone. But every system will face its own version of one: a server crash, a security breach, a sudden spike in users, a critical dependency going offline. The question is whether it was built for that moment, or only for the demo.
I'm still learning how to bring these lessons into the work I do now. I've recently joined RIVER, and I'm looking at how AI systems handle pressure, how they fail, what their fallback modes look like. But the instinct is the same one I developed in December 2012, standing in the rain in Apia, trying to figure out which village to reach first.
Build for the worst day. Everything else is a bonus. If you're building systems that need to hold up when things go wrong, we'd like to hear from you.
