
What Enterprise RFPs Get Wrong About AI

Most AI RFPs are traditional software procurement templates with 'AI' bolted on. They actively select for the wrong partner. What to ask instead.
25 June 2024 · 9 min read
Isaac Rolfe
Managing Director
I've responded to a lot of AI RFPs over the past two years. Government ones. Enterprise ones. Some from organisations with mature procurement teams who've bought complex technology before. And almost every single one has been a traditional software RFP with the word "AI" find-and-replaced into the requirements section. The questions don't make sense. The evaluation criteria select for the wrong things. The format itself pushes vendors toward answers that sound good but mean nothing.

The Template Problem

Most procurement teams start an AI RFP the same way they'd start any technology procurement. They pull out the template they used for their last CRM or ERP purchase, update the requirements section, and send it out. The structure looks familiar: functional requirements, technical requirements, pricing, references, company overview.
The problem is that AI procurement has almost nothing in common with software procurement. Software is a finished product. You can demo it, test it, compare feature lists. AI is a capability that gets built, tuned, and refined against your specific data and workflows. Evaluating it like software is like evaluating an architect by asking them to provide the finished building in their proposal.

Five Things RFPs Get Wrong

1. Requiring Specific Model Names

I've seen RFPs that require vendors to specify which AI model they'll use. "The solution must utilise GPT-4" or "Please confirm whether your solution uses Claude or Gemini."
By the time the RFP closes, the model landscape will have shifted. By the time the contract is signed, it will have shifted again. By the time the solution is in production, the model named in the RFP may be two generations behind.
There were 14 major foundation model releases in the first half of 2024 alone, across OpenAI, Anthropic, Google, and Meta (Source: AI Index Report, Stanford HAI, 2024).
The right question isn't "which model will you use?" It's "how do you evaluate and select models for a given use case, and how do you handle model transitions when better options become available?" That tells you whether the vendor has a methodology or whether they've just picked whatever was trending when they built their product.
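To make "has a methodology" concrete: a vendor with a real selection process can usually show something like the harness below, where every candidate model is scored against the same task-specific test set, so adopting a new model is a configuration change rather than a rebuild. This is a minimal sketch, not a real implementation; the model names and the call_model function are placeholders for whatever provider SDK is actually in use.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    prompt: str
    expected: str  # ground-truth answer for this use case


def call_model(model_name: str, prompt: str) -> str:
    """Placeholder: route the prompt to the named model via your provider SDK."""
    raise NotImplementedError


def evaluate(model_name: str, test_set: list[TestCase]) -> float:
    """Fraction of test cases answered correctly. Exact match is used here
    for brevity; real harnesses use task-appropriate metrics."""
    correct = sum(
        call_model(model_name, case.prompt).strip() == case.expected
        for case in test_set
    )
    return correct / len(test_set)


# Candidates are data, not architecture. When a new model ships,
# you add a name to this list and re-run the same evaluation.
CANDIDATES = ["model-a", "model-b", "model-c"]


def select_model(test_set: list[TestCase]) -> str:
    scores = {name: evaluate(name, test_set) for name in CANDIDATES}
    return max(scores, key=scores.get)
```

A vendor who can describe something like this, with their own metrics and their own transition process, has a methodology. A vendor who can only name a model has a dependency.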

2. Demanding Fixed Pricing on Uncertain Scope

Traditional software has predictable scope. You need a CRM. It needs to handle contacts, opportunities, and reporting. The vendor prices it per user per month. Done.
AI work doesn't operate that way. The scope of an AI project depends on the data quality, the complexity of the domain, the integration surface, and the level of accuracy required. None of these are fully known until discovery is complete. An RFP that asks for fixed pricing before discovery is asking vendors to either pad their estimates by 200% to cover risk, or lowball the number and make it up on change requests.
Both options are bad for the buyer.
The better approach: price discovery separately. Run a paid discovery phase with one or two shortlisted vendors. Use what you learn to scope the build with real data, real constraints, and real cost estimates. You'll spend $20-40K on discovery and potentially save hundreds of thousands on a badly scoped build.

3. Treating Discovery as Optional

Related to the pricing problem: many RFPs list "discovery workshop" as an optional line item, or don't mention it at all. They assume the vendor can go straight from proposal to delivery.
This only works if you know exactly what you need, your data is clean and accessible, your integration points are documented, and the use case is well-understood by both parties.
That combination doesn't exist. Not in my experience, anyway.
Every AI project we've delivered has changed direction during discovery. Not because the original idea was wrong, but because the original idea was based on assumptions that didn't survive contact with real data. A client wanted automated document classification. Discovery revealed their documents weren't the problem. Their metadata was. The solution looked completely different from what the RFP described.
If your RFP doesn't account for discovery changing the scope, you're building a contract around assumptions that will break.

4. Evaluating on Feature Lists

Software RFPs use feature comparison matrices. Does the product support X? Yes/No. Score accordingly. Highest score wins.
AI vendors have figured this out. They fill in "Yes" for everything. Can your solution handle unstructured data? Yes. Can it integrate with our existing systems? Yes. Does it support real-time inference? Yes. Can it operate on-premise? Yes.
Every answer is technically true and practically meaningless. "Yes, our solution can handle unstructured data" might mean "we've built a production pipeline that processes 10,000 documents per hour" or it might mean "we could probably figure that out."
The feature matrix can't tell the difference. And since every vendor scores the same, you end up evaluating on price, which selects for the vendor who understood the least about what's actually involved.
What works better: ask for case studies with measurable outcomes. Not "we delivered an AI solution for a financial services client." Something specific. What was the accuracy before and after? How long did it take to reach production? What went wrong? A vendor who can't give you specifics either doesn't have the experience or doesn't measure outcomes. Both are disqualifying.

5. Ignoring Delivery Capability

The RFP format evaluates what you'll get. It rarely evaluates how you'll get it. For AI, the "how" matters more than almost anything else.
How does the vendor handle model drift in production? What happens when accuracy drops below threshold? How do they approach data privacy and sovereignty? What does their handover process look like? Will you own the IP? Can your team maintain the solution independently?
These questions don't fit neatly into a requirements matrix. They require conversation, demonstration, and reference checks with clients who've been through the full delivery lifecycle, not just the sales process.
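For the drift question in particular, a credible answer usually involves continuous accuracy monitoring against an explicit, contractually agreed threshold with a defined escalation path. A minimal sketch of the idea, assuming a periodic batch of human-reviewed production samples; the baseline and tolerance values here are illustrative, not recommendations:

```python
BASELINE_ACCURACY = 0.92   # accuracy accepted at go-live
DRIFT_TOLERANCE = 0.05     # agreed in the contract, not picked ad hoc


def alert_vendor_and_owner(accuracy: float) -> None:
    """Placeholder: wire this to your paging or ticketing system."""
    print(f"Drift alert: production accuracy {accuracy:.2%} is below "
          f"baseline {BASELINE_ACCURACY:.2%} minus tolerance")


def check_drift(predictions: list[str], labels: list[str]) -> None:
    """Run on each batch of labelled production samples."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    accuracy = correct / len(labels)
    if accuracy < BASELINE_ACCURACY - DRIFT_TOLERANCE:
        # The escalation path is the point: who gets paged, what gets
        # retrained, and who pays for it should all be answered in the
        # RFP response, not negotiated after the first incident.
        alert_vendor_and_owner(accuracy)
```

The code is trivial. What separates vendors is whether they can answer the questions in the comments.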

What a Better AI RFP Looks Like

Drop the feature matrix. Drop the fixed-price requirement. Instead, structure your RFP around these five areas:
Delivery methodology. How does the vendor approach AI projects? What does their discovery process look like? How do they handle scope changes when data reality doesn't match assumptions?
Team capability. Who will actually work on your project? What's their experience with your industry? Ask for CVs, not company bios.
Outcomes evidence. Specific, measurable results from similar engagements. Accuracy improvements, processing time reductions, cost savings. With client references who can confirm them.
IP and data ownership. What do you own at the end? Can you take the solution to another vendor? Where does your data live? What are the exit terms?
Commercial flexibility. Can the vendor price discovery separately? Do they offer staged commitments that let you evaluate before committing to a full build? Is there a mechanism for scope adjustment based on what discovery reveals?
The best AI engagements I've been part of started with a conversation, not a compliance matrix. The matrix rewards vendors who are good at answering questionnaires; it doesn't reward vendors who are good at delivering AI.

The Uncomfortable Truth

The RFP format exists to create fairness and accountability in procurement. Those are good goals. But when the format actively selects for the wrong vendor, the fairness is superficial.
I've lost RFPs to vendors who scored higher on feature matrices but couldn't deliver what they promised. I've seen clients go through a six-month procurement process, select a vendor, and then spend another six months discovering that the vendor's "Yes" answers were aspirational rather than factual.
If your procurement process takes longer than the AI landscape takes to shift, the process needs to change. Not the technology.
The organisations getting the most from AI right now aren't the ones with the best RFPs. They're the ones who shortened their procurement cycle, ran paid discovery with two or three candidates, and made a decision based on demonstrated capability rather than documented promises.