Two months ago, we shipped a feature in three days that should have taken five. The client was delighted. The team was proud. And then we spent chunks of the next three sprints cleaning up everything that shipped alongside it. The net result: saving two days cost us two and a half more in rework, plus a dent in client confidence. It's the kind of lesson that sticks with you, because the maths is brutal and it doesn't lie.
What You Need to Know
- Shipping faster than the work demands creates hidden rework that costs more than the time saved
- Velocity metrics can incentivise speed over quality without anyone noticing
- Deliberate delivery means choosing the right pace for each piece of work, not the fastest pace possible
- The hardest part is telling a happy client that you need to slow down
What Happened
The client needed a reporting dashboard. Standard requirements: data from three sources, filtered views, export to CSV. We'd built similar things before. The team estimated five days. The sprint was tight - other commitments were competing for attention - and the senior developer said they could probably do it in three if they cut some corners.
I said yes. That was the mistake.
The corners they cut were reasonable in isolation. Hardcoded a configuration that should have been dynamic. Skipped the error handling for an edge case that "probably won't happen." Used an inline style instead of extending the design system. Wrote the tests for the happy path but not the failure modes.
Each shortcut saved a few hours. Collectively, they created a feature that worked in demo but broke in production within a week.
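Take the configuration shortcut. The difference looked roughly like the sketch below - the source names, URLs, and config path are illustrative rather than our actual code, but the shape of the trade-off is the same: one version needs a code change and a deployment to add a source, the other needs a settings change.

```typescript
// Minimal sketch only; names, URLs, and paths are invented for illustration.
import { readFileSync } from "node:fs";

// What shipped: the three sources baked into the module.
// Adding a fourth means editing this array, reviewing, and deploying.
const REPORT_SOURCES = [
  { name: "billing", url: "https://billing.internal/api" },
  { name: "crm", url: "https://crm.internal/api" },
  { name: "usage", url: "https://usage.internal/api" },
];

// What the five-day estimate assumed: sources loaded from configuration,
// so adding one is a settings change rather than a release.
interface ReportSource {
  name: string;
  url: string;
}

function loadSources(path = "config/report-sources.json"): ReportSource[] {
  return JSON.parse(readFileSync(path, "utf-8")) as ReportSource[];
}
```

The config-driven version is barely more code. The cost of the shortcut was never the typing; it was the deployment it forced later.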
The Aftermath
The configuration issue surfaced first. A new data source needed to be added, and the hardcoded approach meant modifying production code instead of updating a config file. That's a deployment where it should have been a setting change.
Then the edge case happened. Of course it did. A user exported a report with no data, and the system returned a 500 error instead of an empty file. Simple to fix. But the client saw it, lost a small amount of confidence, and asked for a "stability review" of the feature. That review took a day.
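The fix itself was a few lines. Here's a hedged sketch of the kind of guard that was missing, with invented names and a deliberately simplified CSV builder (no quoting or escaping); the real export does more than this, but the missing branch is the point.

```typescript
// Illustrative only: rows are whatever the report query returns after filtering.
interface Row {
  [column: string]: string | number;
}

// Simplified CSV builder for the sketch (no quoting/escaping).
export function toCsv(rows: Row[], columns: string[]): string {
  const header = columns.join(",");
  // The missing branch: an empty result should still be a valid, empty
  // export rather than an error that bubbles up to the user as a 500.
  if (rows.length === 0) {
    return header + "\n";
  }
  const body = rows
    .map((row) => columns.map((col) => String(row[col] ?? "")).join(","))
    .join("\n");
  return `${header}\n${body}\n`;
}
```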
The missing tests meant that when we fixed the edge case, we introduced a regression in the filtering logic. Found it in QA, not production, but the fix-and-retest cycle cost another day.
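This is where the skipped failure-mode tests bite. A sketch of the tests we should have written alongside the happy path, using Vitest-style assertions and the illustrative toCsv from the guard sketch above (the module path is hypothetical):

```typescript
import { describe, expect, it } from "vitest";
import { toCsv } from "./export-csv"; // hypothetical module from the sketch above

describe("report export", () => {
  it("returns a header-only CSV when no rows match the filters", () => {
    expect(toCsv([], ["date", "source", "amount"])).toBe("date,source,amount\n");
  });

  it("renders filtered rows as expected", () => {
    const rows = [{ date: "2024-01-01", source: "billing", amount: 42 }];
    expect(toCsv(rows, ["date", "source", "amount"])).toBe(
      "date,source,amount\n2024-01-01,billing,42\n"
    );
  });
});
```

Tests like these are exactly what catches a filtering regression in the fix-and-retest cycle instead of in QA.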
4.5x - the average cost of fixing a defect post-release vs during development (Source: IBM Systems Sciences Institute, Relative Cost of Fixing Defects, 2020)
The Real Cost
I tracked the total time spent on the dashboard feature from start to stable:
- Initial build: 3 days (saved 2 days from estimate)
- Config refactor: 1 day
- Edge case fix and client review: 1.5 days
- Regression fix and retest: 1 day
- Updated tests: 0.5 days
- Client confidence meeting: 0.5 days
Total: 7.5 days against the original five-day estimate. We "saved" two days up front and spent 2.5 extra days fixing the consequences. Plus an unquantifiable cost to client confidence.
Why Speed Is Seductive
Enterprise delivery has a velocity problem. Not too little velocity - too much faith in velocity as a measure of progress. Story points completed per sprint. Features shipped per quarter. The metrics reward speed. Nobody measures the rework that speed creates.
We celebrate the sprint where we shipped ten stories. But the reality is, if half of them come back as rework, you didn't ship ten stories. You shipped five and created five problems.
Clients contribute to this too. They see fast delivery and think the team is performing well. They see measured delivery and wonder if they're getting value for money. It takes a confident relationship to say "we could ship this faster, but the quality cost isn't worth it."
What Deliberate Delivery Looks Like
We've made three changes since the dashboard incident.
Estimates include quality time. When the team says five days, that includes testing, documentation, and design system compliance. If someone proposes doing it in three, the question is "what are we cutting, and what's the cost of cutting it?" The answer has to be explicit.
Shortcuts require a ticket. If we take a shortcut to meet a deadline, we create a ticket for the remediation work immediately. Not "we'll get to it." A ticket with a priority, in the backlog, visible to the client. This makes the cost of the shortcut visible.
We track rework. Every story that comes back for fixes gets tagged. At the end of each quarter, we can see which features generated rework and why. The pattern is consistent: the features that were rushed generate two to three times more rework than the ones that were delivered at the estimated pace.
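The tally itself is simple. A rough sketch of the quarterly comparison, assuming a tracker export with rework and rushed tags - the fields here are assumptions for illustration, not our tracker's actual schema, and it ignores cohort size, so treat it as a starting point rather than the measure itself.

```typescript
// Assumed shape of a tracker export; fields are illustrative.
interface Ticket {
  feature: string;
  rework: boolean; // tagged when a story came back for fixes
  rushed: boolean; // tagged when the original estimate was cut to hit a date
  days: number;
}

// Compares total rework days on rushed features vs features delivered at
// the estimated pace. A ratio well above 1 is the pattern described above.
function reworkRatio(tickets: Ticket[]): number {
  const reworkDays = (rushed: boolean) =>
    tickets
      .filter((t) => t.rework && t.rushed === rushed)
      .reduce((total, t) => total + t.days, 0);
  const paced = reworkDays(false);
  return paced === 0 ? Infinity : reworkDays(true) / paced;
}
```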
The Hard Conversation
The hardest part of deliberate delivery is the conversation with a client who liked the fast version. "We shipped the last feature in three days, why does this one take five?" The honest answer is that the three-day feature cost you more in total. The five-day version will be done once.
Not every client wants to hear that. But the ones who do are the ones we build the best relationships with. They understand that sustainable delivery is a business advantage, not a compromise. Speed matters, but only the kind of speed that doesn't come back as rework.
