I've spent my career in research and evaluation. When I started working with technology companies, I was struck by something: the decisions are enormous and the evidence behind them is thin. Organisations spend hundreds of thousands of dollars on systems based on a vendor demo, a reference call, and a gut feeling. In any other discipline, that would be considered reckless. In enterprise technology, it's Tuesday.
What You Need to Know
- Most enterprise technology decisions rely on vendor-supplied evidence, which is inherently biased
- A structured evaluation framework doesn't need to be academic to be rigorous
- Three types of evidence matter: outcome evidence, fit evidence, and implementation evidence
- Better evaluation upfront prevents the costly failures that poor decisions create downstream
The Evidence Gap
Enterprise procurement runs on RFPs, vendor presentations, and analyst reports. Each of these has a structural bias.
RFPs ask vendors to describe their own product against your criteria. Every response will claim to meet every requirement. The differentiation is in the nuances, and nuances don't survive a compliance matrix.
Vendor demos show the product at its best. Curated data. Ideal workflows. A skilled presenter who knows exactly which screens to show and which to skip. The demo bears the same relationship to reality as a show home does to actual living.
Analyst reports assess products against general criteria, not your specific context. A Gartner Magic Quadrant tells you which vendors are leaders in a broad category. It doesn't tell you whether a specific product will work for your specific organisation with your specific constraints.
55% of enterprise software purchases fail to meet expectations within two years (source: Panorama Consulting, 2018).
When more than half of purchases underperform, the problem isn't bad products. It's bad evaluation.
A Practical Framework
This framework isn't academic. It's designed for real procurement timelines and real organisational constraints. Three types of evidence, each addressing a different question.
Outcome Evidence: Does This Work?
Before evaluating whether a product fits your needs, ask whether it works at all. This sounds obvious. In practice, it's often skipped.
Independently verified results. Not the case studies on the vendor's website. Those are marketing. Ask for references you can speak to directly. Ask specific questions: What was the implementation timeline? What was the actual cost versus the estimate? What problems emerged? Would you do it again?
Published research. For established product categories, academic and industry research exists. It's not always easy to find, and it's not always current. But a product category with no independent evidence of effectiveness should raise questions.
Pilot data. When possible, run a small-scale pilot with real users and real data before committing to a full implementation. A two-week pilot with twenty users will tell you more than six months of evaluation meetings.
"The most honest thing a vendor can do is connect you with a customer who had a difficult implementation. If they only offer success stories, they're not being transparent about the range of outcomes."
Dr Tania Wolfgramm, Chief Research Officer
Fit Evidence: Does This Work for Us?
A product can be excellent and still be wrong for your organisation. Fit evidence assesses the match between the product and your specific context.
Process alignment. Map your actual workflows against the product's assumed workflows. Not the idealised processes in your documentation. The real ones, including the workarounds, exceptions, and manual steps. Where do they align? Where do they diverge? How significant are the gaps?
Technical fit. Evaluate integration requirements against reality. What APIs exist? What data formats are supported? What's the authentication model? How does the product handle the specific volume, complexity, and variety of your data?
Organisational fit. This is the one most evaluations miss. Does the implementation approach match your organisation's capacity for change? Do you have the internal skills to configure and maintain this product? Is the vendor's support model compatible with your operating model?
Implementation Evidence: Can This Be Delivered?
The best product with the worst implementation still fails. Implementation evidence assesses whether the delivery is realistic.
Vendor track record. Not their best case. Their average case. What percentage of implementations are delivered on time and on budget? What's the typical variance? Vendors who won't answer this question are answering it.
Internal capacity. Does the organisation have the people, time, and bandwidth to support this implementation? Who will be the internal project lead? What existing commitments will they need to deprioritise?
Risk identification. What are the known risks? Integration complexity, data migration challenges, change management requirements, timeline dependencies. Every project has risks. The question is whether they've been identified and planned for, or whether they're sitting in a column marked "low probability."
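To make the framework concrete, here is a minimal sketch of the three evidence types as a structured scorecard, in Python. The questions are condensed from the sections above; the 0-to-3 scoring scale, the class and function names, and the "ExampleVendor" placeholder are illustrative assumptions, not a formal instrument.

```python
from dataclasses import dataclass, field

# The three evidence types from the framework above. Questions are condensed
# from this article; the 0-3 scale is an illustrative assumption.
EVIDENCE_QUESTIONS = {
    "outcome": [
        "Independently verified results from direct reference calls?",
        "Published independent research on the product category?",
        "Pilot data from real users and real data?",
    ],
    "fit": [
        "Process alignment with actual workflows, including workarounds?",
        "Technical fit: APIs, data formats, authentication, data volume?",
        "Organisational fit: change capacity, internal skills, support model?",
    ],
    "implementation": [
        "Vendor's average-case delivery record, on time and on budget?",
        "Internal capacity: named project lead, deprioritised commitments?",
        "Known risks identified and planned for?",
    ],
}

@dataclass
class Scorecard:
    """One vendor's evidence, scored 0 (none) to 3 (independently verified)."""
    vendor: str
    scores: dict = field(default_factory=dict)

    def record(self, evidence_type: str, question_index: int, value: int) -> None:
        if not 0 <= value <= 3:
            raise ValueError("scores run from 0 (no evidence) to 3 (verified)")
        question = EVIDENCE_QUESTIONS[evidence_type][question_index]
        self.scores[question] = value

    def unanswered(self) -> list[str]:
        """Questions with no evidence yet: these drive the next reference call."""
        return [q for qs in EVIDENCE_QUESTIONS.values()
                for q in qs if q not in self.scores]

card = Scorecard(vendor="ExampleVendor")  # hypothetical vendor name
card.record("outcome", 2, 3)              # pilot ran with real users and data
print(len(card.unanswered()))             # eight questions still need evidence
```

The mechanics don't matter. What matters is that every question gets an explicit answer with a recorded strength of evidence, rather than living in someone's head.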
Applying This Practically
I'm not suggesting every technology purchase needs a three-month evidence review. The depth of evaluation should match the size of the decision.
Small purchases (under $50K). A structured conversation using the three evidence types. A few reference calls. An honest internal assessment of fit and capacity. Half a day of work.
Medium purchases ($50K-$250K). A formal evaluation framework. Multiple reference calls. A technical proof of concept. An organisational readiness assessment. One to two weeks of work.
Large purchases (over $250K). All of the above, plus a pilot programme. Independent verification of vendor claims. External expertise if the internal team lacks evaluation capability. Four to six weeks of work.
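This tiering reduces to simple threshold logic, sketched below. The dollar boundaries and effort estimates are the ones above; the function name and return structure are just one way to write it down.

```python
def evaluation_tier(purchase_usd: float) -> dict:
    """Map purchase size to evaluation depth, using the thresholds above."""
    if purchase_usd < 50_000:
        return {"tier": "small", "effort": "half a day",
                "activities": ["structured conversation across the three evidence types",
                               "a few reference calls",
                               "honest internal assessment of fit and capacity"]}
    if purchase_usd <= 250_000:
        return {"tier": "medium", "effort": "one to two weeks",
                "activities": ["formal evaluation framework",
                               "multiple reference calls",
                               "technical proof of concept",
                               "organisational readiness assessment"]}
    return {"tier": "large", "effort": "four to six weeks",
            "activities": ["everything in the medium tier",
                           "pilot programme",
                           "independent verification of vendor claims",
                           "external evaluation expertise if needed"]}

assert evaluation_tier(120_000)["tier"] == "medium"
```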
The investment in evaluation is proportional to the cost of getting it wrong. A $500K system that fails isn't just a $500K loss. It's the opportunity cost, the organisational disruption, the loss of trust, and the cost of starting over.
The Gut-Feel Problem
I'm not against intuition. Experience-based judgement is valuable, especially from people who've been through multiple implementations. What I'm against is unexamined intuition. The feeling that a product is right without articulating why.
When someone says "I just have a good feeling about this vendor," the right response is "what specifically gives you that feeling?" Often, the answer reveals genuine insight that can be validated. "Their demo showed they understand our workflow." Good. Let's verify that with a technical proof of concept and a reference call. The intuition becomes evidence.
Good decisions in enterprise technology aren't about eliminating judgement. They're about supporting judgement with evidence, so that when the project gets difficult (and it will), the original decision stands up to scrutiny.
