A startup can launch to 10 users, break things, and iterate. An enterprise web app launching to 500 internal users does not get that luxury. When someone's job depends on the tool working on Monday morning, "we will fix it in the next sprint" is not an answer. Here is how we plan for a launch that works from day one.
What You Need to Know
- Enterprise launches are fundamentally different from consumer launches. You can't soft-launch to internal staff whose daily work depends on the system
- Performance testing with realistic data volumes and concurrent users is non-negotiable for enterprise apps
- The rollout strategy matters as much as the software. Training, support channels, and feedback loops determine adoption
- Most enterprise launch failures aren't engineering. They're organisational
The Enterprise Launch Problem
Consumer apps launch to a trickle and scale. Enterprise apps launch to everyone at once, because the business has decided that Tuesday is the day the old system stops and the new one starts. There's a training programme scheduled, a communications plan in motion, and 500 people expecting to log in at 8:30am and do their jobs.
This is a different kind of pressure. There's no public beta. There's no "early access." There are 500 people, a deadline, and a system that must work.
We've launched systems to audiences this size three times now. Each time, the launch itself was the most stressful week. And each time, the preparation that made the difference wasn't the code. It was everything around the code.
Performance Under Real Load
Test with realistic data
The most common performance failure we see is a system that works beautifully with 50 test records and collapses with 50,000 real ones. Database queries that are instant with small datasets become minutes-long when the table has actual production data.
Before any enterprise launch, we load the database with realistic volumes. Not synthetic data, actual data (or anonymised equivalents) in the same shapes and sizes the production system will handle. Then we test every query, every report, every search.
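A minimal sketch of what this testing catches, using SQLite as a stand-in for whatever database the system runs on. The table name, columns, and row count are illustrative assumptions; the point is that the access path for a hot query looks completely different at 50,000 rows than at 50.

```python
# Sketch: check the access path of a hot query at realistic volume.
# "applications" and its columns are a hypothetical schema; SQLite
# stands in for the production database.
import random
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE applications (id INTEGER PRIMARY KEY, status TEXT, ref TEXT)"
)

statuses = ["draft", "submitted", "approved", "rejected"]
rows = [
    (random.choice(statuses),
     "".join(random.choices(string.ascii_uppercase, k=8)))
    for _ in range(50_000)  # realistic volume, not 50 test records
]
conn.executemany("INSERT INTO applications (status, ref) VALUES (?, ?)", rows)

def plan_for(query):
    # The last column of EXPLAIN QUERY PLAN output describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

query = "SELECT * FROM applications WHERE status = 'submitted'"
plan_before = plan_for(query)   # a full scan over all 50,000 rows
conn.execute("CREATE INDEX idx_status ON applications (status)")
plan_after = plan_for(query)    # an index lookup

print(plan_before)
print(plan_after)
```

With 50 test records both plans feel instant, so the missing index only shows up once the table holds production-shaped data.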
53% of mobile users abandon a site that takes longer than 3 seconds to load (Source: Think with Google, 2016)
Google's research is about mobile, but the principle applies. People don't wait. Enterprise users are more forgiving than consumers, but not by much. If the dashboard takes 8 seconds to load with real data, they'll be on the phone to the helpdesk before lunch.
Test with concurrent users
A system that works for one user might fail for 50 using it simultaneously. Database connection pools run out. Memory usage spikes. Cache invalidation causes stale data. API rate limits get hit.
We use load testing tools to simulate realistic concurrent usage. Not "1,000 users hitting the home page," but "200 users doing different things at the same time": submitting forms, running reports, uploading files, searching records. The traffic pattern that matches how the system will actually be used.
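In practice this is a job for a dedicated load testing tool, but the core idea, a weighted traffic mix rather than identical requests, can be sketched in a few lines. The scenario names and weights below are invented; real weights come from how the old system was actually used.

```python
# Sketch of a mixed-traffic profile: pick scenarios by weight so the
# simulated load matches real usage. Scenario names and weights are
# illustrative assumptions.
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

SCENARIOS = {             # weight ~ share of real traffic
    "submit_form": 40,
    "run_report": 10,
    "upload_file": 5,
    "search_records": 45,
}

def virtual_user(user_id, actions=20):
    """One simulated user performing a realistic mix of actions."""
    rng = random.Random(user_id)   # deterministic per user
    names = list(SCENARIOS)
    weights = [SCENARIOS[n] for n in names]
    done = Counter()
    for _ in range(actions):
        scenario = rng.choices(names, weights=weights, k=1)[0]
        # A real test would issue the HTTP request for `scenario` here
        # and record its latency; this sketch only records the mix.
        done[scenario] += 1
    return done

# "200 users doing different things at the same time."
with ThreadPoolExecutor(max_workers=50) as pool:
    totals = sum(pool.map(virtual_user, range(200)), Counter())

print(totals.most_common())
```

The same weighted-mix idea maps directly onto task weights in tools like Locust or k6.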
Identify the slow paths
Every application has hot paths (used constantly) and cold paths (used occasionally). The hot paths need to be fast. The cold paths need to not break everything else.
A report that takes 30 seconds to generate is fine if it runs in the background and notifies the user when it's ready. It's not fine if it locks the database and makes the whole application unresponsive for everyone.
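The background-report pattern can be sketched with a simple job queue: the request returns immediately, the slow generation happens off the hot path, and the user is notified when it's done. The `notify` callback here is a stand-in for an email or in-app notification; a production system would use a proper task queue rather than a bare thread.

```python
# Sketch: run a slow report in the background and notify when ready,
# instead of blocking the request (and the database) while it generates.
import queue
import threading

jobs = queue.Queue()
results = {}

def worker():
    while True:
        job_id, generate, notify = jobs.get()
        results[job_id] = generate()   # the slow part, off the hot path
        notify(job_id)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def request_report(job_id, generate, notify):
    """Return immediately; the report arrives later."""
    jobs.put((job_id, generate, notify))
    return {"status": "queued", "job_id": job_id}

# Usage: the request returns at once; the worker notifies on completion.
response = request_report(
    "weekly-42",
    generate=lambda: "report contents",
    notify=lambda jid: print(f"report {jid} ready"),
)
print(response)
jobs.join()   # in production there is no join; the worker just keeps running
```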
Infrastructure Ready
Monitoring from day one
You can't fix what you can't see. Application monitoring, error tracking, and performance dashboards need to be live before the users arrive.
When something goes wrong on launch day (and something always does), the difference between a 5-minute fix and a 3-hour investigation is whether you can see what's happening. Error logs, response times, database query performance, memory usage. All visible, all alerting.
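As a sketch of what "all alerting" means for response times: watch a rolling window and fire when the tail latency crosses a threshold. The 2-second threshold and 100-sample window are illustrative assumptions; in practice this logic lives in the monitoring stack, not in application code.

```python
# Sketch: a minimal tail-latency alert over a rolling window.
# Threshold and window size are illustrative.
from collections import deque
from statistics import quantiles

class LatencyAlert:
    def __init__(self, threshold_ms=2000, window=100):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def p95(self):
        if len(self.samples) < 20:
            return 0.0                       # not enough data to judge
        return quantiles(self.samples, n=20)[-1]   # 95th percentile

    def firing(self):
        return self.p95() > self.threshold_ms

alert = LatencyAlert()
for ms in [120] * 95 + [9000] * 5:   # a mostly-fast service with a slow tail
    alert.record(ms)
print(alert.p95(), alert.firing())
```

Averages hide exactly this failure mode: the mean here is well under the threshold while one user in twenty is waiting nine seconds.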
Have a rollback plan
If the launch goes badly wrong, can you revert? The answer needs to be yes, and the process needs to be tested. A rollback plan that exists only in theory is not a plan.
We test our rollback procedures the week before launch. Switch back to the old system, verify data integrity, confirm that nothing was lost. If we can't demonstrate a clean rollback, we're not ready to launch.
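The "verify data integrity" step can be automated as a comparison of row counts and content digests per table. A sketch, with two in-memory SQLite databases standing in for the old and new systems and an invented `requests` table; the table list would come from a trusted schema inventory, not user input.

```python
# Sketch: the data-integrity check of a rollback rehearsal. Compares
# row counts and a content digest per table between two databases.
# Table names are illustrative and come from a trusted list.
import hashlib
import sqlite3

def table_digest(conn, table):
    count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    h = hashlib.sha256()
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY 1"):
        h.update(repr(row).encode())
    return count, h.hexdigest()

def verify_rollback(old_conn, new_conn, tables):
    """Return the tables that do not match; an empty list means clean."""
    return [t for t in tables
            if table_digest(old_conn, t) != table_digest(new_conn, t)]

# Rehearsal with two stand-in databases:
a, b = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for conn in (a, b):
    conn.execute("CREATE TABLE requests (id INTEGER PRIMARY KEY, body TEXT)")
    conn.executemany("INSERT INTO requests VALUES (?, ?)",
                     [(i, f"req-{i}") for i in range(1000)])
b.execute("UPDATE requests SET body = 'tampered' WHERE id = 7")

mismatches = verify_rollback(a, b, ["requests"])
print(mismatches)   # the single changed row is caught
```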
Scale the infrastructure early
Don't launch on the minimum viable infrastructure and plan to scale up if needed. Launch on infrastructure that can handle 2x your expected load, then scale down once you understand actual usage patterns. The cost of over-provisioning for a month is trivial compared to the cost of a failed launch.
The Rollout Plan
Pilot group first
Even when the full launch is on a fixed date, we run a pilot group one to two weeks early. Ten to twenty users from different roles, using the real system with real data. They find the things testing didn't catch: confusing labels, missing permissions, workflows that don't match reality.
The pilot group's feedback drives the final round of changes. And when the full launch happens, those pilot users become informal champions who help their colleagues.
Training that matches the work
Generic training decks don't work. "Here's the dashboard, here's the sidebar, here are the settings" tells people what the system has. It doesn't tell them how to do their job in it.
We structure training around tasks, not screens. "Here's how you submit a new application." "Here's how you approve a request." "Here's how you run your weekly report." Each training scenario follows a real workflow from start to finish.
"The most effective training material we have produced was a one-page cheat sheet for each role. More useful than any training session."
John Li, Chief Technology Officer
Support channel for the first two weeks
Launch week needs dedicated support. Not the standard IT helpdesk queue. A direct channel where users can get help quickly. A Slack channel, a dedicated email address, or people physically available in the office.
The volume of support requests in the first two weeks tells you everything about the quality of the build and the training. Track every request, categorise them, and fix the patterns. If 30 people ask the same question, the interface needs to change, not the training.
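Tracking and categorising those requests is simple enough to sketch directly. The sample questions are invented; the threshold of 30 is the rule of thumb from the paragraph above.

```python
# Sketch: categorise launch-week support requests and surface the
# patterns that indicate an interface problem. Sample data is invented.
from collections import Counter

requests = (
    ["how do I reset the filter?"] * 34       # 34 people, same question
    + ["permissions error on upload"] * 6
    + ["where is last week's report?"] * 3
)

counts = Counter(requests)
PATTERN_THRESHOLD = 30   # the same question from 30 people: change the UI

patterns = [(q, n) for q, n in counts.most_common()
            if n >= PATTERN_THRESHOLD]
for question, n in patterns:
    print(f"{n}x {question!r}: fix the interface, not the training")
```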
The Organisational Piece
Stakeholder alignment
The system can work perfectly and still fail if the organisation isn't aligned. The project sponsor needs to visibly support the change. Managers need to use the system themselves, not just require their teams to use it. The messaging needs to be consistent: this is how we work now.
We've seen technically successful launches fail because middle management quietly allowed teams to keep using the old process. The system becomes optional, adoption stalls, and six months later someone asks why the new system isn't working.
Measure adoption, not just uptime
A successful launch isn't "the system didn't crash." It's "people are using it to do their work." Track login rates, task completion, and support requests. Compare against the baseline.
If adoption is below 80% after two weeks, something is wrong. Find out what, and fix it. The first month after launch determines whether the system becomes part of how the organisation works or whether it becomes the tool people tolerate.
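The adoption check itself is a one-liner worth making explicit, so nobody argues about what "below 80%" means. The user counts below are invented; the active-user figure would come from login or analytics data.

```python
# Sketch: the two-week adoption check. Figures are invented; the 80%
# target is the threshold discussed above.
def adoption_rate(active, total):
    """Share of the launch audience actually using the system."""
    return active / total

TOTAL_USERS = 500
ADOPTION_TARGET = 0.80

week_two_active = 361   # distinct users active in week two (invented)
rate = adoption_rate(week_two_active, TOTAL_USERS)
verdict = "ok" if rate >= ADOPTION_TARGET else "investigate"
print(f"{rate:.0%} adoption: {verdict}")
```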
The Checklist
For any enterprise launch to 200+ users:
- Load tested with realistic data volumes and concurrent users
- Monitoring and alerting live before launch
- Rollback plan tested and documented
- Infrastructure provisioned at 2x expected load
- Pilot group run at least one week before full launch
- Training structured around tasks, not features
- Dedicated support channel for first two weeks
- Stakeholder alignment confirmed
- Adoption metrics defined and tracking from day one
None of this is glamorous. Most of it isn't code. But it's the difference between a launch that works and one that becomes a cautionary tale.
