
The Case for Boring Technology

Boring tech is proven tech. The real risk in enterprise isn't using old tools. It's chasing new ones.
1 August 2021 · 6 min read
John Li
Chief Technology Officer
There's a talk by Dan McKinley from 2015 called "Choose Boring Technology" that I come back to every year. The core argument: every organisation has a limited number of "innovation tokens" to spend, and most waste them on technology choices that don't create competitive advantage. Six years later, it's more relevant than ever.

What You Need to Know

  • "Boring" technology means well-understood, widely deployed, extensively documented, and predictably behaved
  • The cost of a technology choice isn't the licence or the implementation. It's the ongoing operational knowledge required to keep it running
  • New technology has unknown failure modes. Boring technology has documented failure modes with known solutions
  • Enterprise teams should spend innovation tokens on business problems, not infrastructure problems

What "Boring" Means

Boring doesn't mean outdated. PostgreSQL is boring. It's also one of the most capable databases available. Node.js is boring. React is boring. AWS S3 is boring. These technologies have been in production at scale for years. Their failure modes are documented. Their edge cases are known. Stack Overflow has answers for every error message.
Boring means you're not the first team to encounter a problem. Someone else already hit it, documented it, and published the solution. Your on-call engineer at 3am can find the fix in ten minutes, not ten hours.
73% of CIOs said reducing technology complexity is a top priority for 2021 (Source: Gartner CIO Survey, 2021).
New technology is the opposite. The documentation is incomplete. The community is small. The edge cases haven't been discovered yet because not enough people have been in production long enough. When something goes wrong, and something always goes wrong, you're figuring it out from first principles. That's interesting and educational and a terrible way to run production systems that people depend on.
I've never seen an enterprise project fail because the technology was too boring. I've seen plenty fail because the technology was too interesting.

The Innovation Token Model

McKinley's model is useful. Every organisation has a limited capacity for new, unproven things. Call these innovation tokens. You have maybe three. Spend them wisely.
If your business differentiator is your recommendation algorithm, spend an innovation token there. Use a novel approach, experiment, push boundaries. That's where innovation creates value.
But your database? Your deployment pipeline? Your authentication system? These aren't differentiators. They're infrastructure. Use the boring option. Postgres, not the graph database that just launched. Standard CI/CD, not the new platform that promises to be ten times faster. OAuth 2.0 with a proven library, not a custom authentication layer.
The teams that get this wrong spend all their innovation tokens on infrastructure and have none left for the things that actually matter to their business. They end up with a cutting-edge tech stack that delivers a mediocre product.
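The token model above can be made concrete with a toy sketch. Everything here is illustrative: the budget of three, the `Choice` type, and the field names are assumptions for the example, not anything from McKinley's talk.

```python
# Toy sketch of the "innovation token" model. The budget, the Choice
# type, and the field names are illustrative assumptions, not part of
# the original talk.
from dataclasses import dataclass

TOKEN_BUDGET = 3  # "You have maybe three."

@dataclass
class Choice:
    area: str
    technology: str
    proven: bool          # boring = well-understood, widely deployed
    differentiator: bool  # does this area create competitive advantage?

def review(stack: list[Choice]) -> list[str]:
    """Flag choices that spend innovation tokens, and warn when tokens
    go to infrastructure or the budget is exhausted."""
    warnings = []
    spent = 0
    for c in stack:
        if c.proven:
            continue  # boring technology costs no tokens
        spent += 1
        if not c.differentiator:
            warnings.append(
                f"{c.area}: spending a token on infrastructure ({c.technology})"
            )
    if spent > TOKEN_BUDGET:
        warnings.append(
            f"over budget: {spent} tokens spent, {TOKEN_BUDGET} available"
        )
    return warnings

stack = [
    Choice("database", "PostgreSQL", proven=True, differentiator=False),
    Choice("auth", "custom auth layer", proven=False, differentiator=False),
    Choice("recommendations", "novel ranking model", proven=False, differentiator=True),
]
print(review(stack))
```

Run against the sample stack, the review flags only the custom auth layer: the novel ranking model spends a token too, but on a differentiator, which is exactly where the model says tokens belong.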

The Hidden Cost of Interesting Technology

When you choose a new technology, you're not just choosing a tool. You're committing to:
Hiring constraints. You need people who know it. If the talent pool is small, you're paying a premium or training from scratch. Both are expensive. Both are slow.
Knowledge risk. If only two people on your team understand the technology and one leaves, you've lost 50% of your operational capacity for that system. With boring technology, the replacement can be productive in days.
Migration burden. New technologies change fast. APIs break between versions. The approach you built on in version 2 might be deprecated in version 3. Boring technology changes slowly and deliberately, because the maintainers know that millions of production systems depend on stability.
Debugging difficulty. When something fails in Postgres, the error message is clear, the documentation is thorough, and someone has written a blog post about your exact problem. When something fails in the database that launched six months ago, the error message might be unclear, the documentation might not cover your use case, and you're opening a GitHub issue.
60% of enterprise technology decisions are driven by developer preference rather than business requirements (Source: Thoughtworks Technology Radar Survey, 2021).

When New Is Worth It

I'm not against new technology. I'm against new technology as a default. There are legitimate reasons to choose something unproven.
The problem genuinely can't be solved with existing tools. This is rarer than most teams think. Before reaching for a new technology, make sure you've actually exhausted the capabilities of what you already have.
The new tool is significantly better in a dimension that matters. Not marginally better. Not theoretically better. Measurably, significantly better at something your business actually needs.
You have the operational maturity to absorb the risk. Small team with one project? The risk of an unproven technology is high. Large team with strong observability, thorough testing, and experienced operators? The risk is manageable.

The RIVER Approach

We default to boring. Our standard stack uses well-established tools that we know deeply. Not because we're conservative, but because our clients need reliability more than they need novelty. When we recommend a technology choice, we're accountable for it working in production for years, not just passing a proof of concept.
The boring choice is often the brave choice. It means telling a client "we don't need the shiny thing" when everyone else is pitching it. It means admitting that Postgres is better for this use case than the purpose-built database that just raised $50 million. It means prioritising the outcome over the CV line item.