
When Integrations Meet AI: The Complexity Multiplier

AI doesn't just add another integration point. It multiplies the complexity of every existing one. Ten months into AI delivery, here's what we expected versus what we found.
1 October 2023·9 min read
John Li
Chief Technology Officer
Mak Khan
Chief AI Officer
We're ten months into delivering AI capabilities inside enterprise environments. The single biggest surprise has nothing to do with models, prompts, or training data. It's the way AI multiplies the complexity of every integration you already have.

What You Need to Know

  • Traditional integrations move structured data between systems. AI integrations need unstructured data, real-time feedback, confidence scoring, and fallback logic
  • Adding AI to an existing integration doesn't add one new concern. It adds four or five that interact with each other
  • Confidence thresholds change routing decisions, which changes error handling, which changes monitoring requirements
  • API design patterns that work for system-to-system integration break down when AI is in the chain

What We Expected

John and the engineering team came into this year with a solid understanding of enterprise integration. We've been building integrations for years. APIs, middleware, event-driven architecture, the full toolkit. When we started adding AI capabilities to existing client systems, we assumed the integration work would be similar. More complex, sure. But fundamentally the same kind of problem.
Mak joined the team to lead our AI engineering work. He came from a research background and assumed the hard part would be model selection, prompt engineering, and inference performance. Integration would be plumbing.
We were both wrong.

What We Found

AI Needs Different Data

Traditional integrations move structured records. A customer record from the CRM to the billing system. An order from the e-commerce platform to the warehouse. Fields map to fields. The data contract is explicit.
AI integrations need something different. A document processing capability needs the raw PDF, but it also needs metadata about who uploaded it, what type of document it is, what context it sits within. A classification model needs the structured record, but it also needs surrounding context that lives in email threads, notes fields, and file attachments.
I kept asking John's team for "the data" and they'd send me clean JSON from the API - but I needed the messy stuff. The structured data was maybe 30% of what the model actually needed to make a good decision.
Mak Khan
Chief AI Officer
This means every integration that feeds an AI capability needs to pull from more sources, handle more formats, and deal with more ambiguity than a traditional integration between the same systems.
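As a sketch of what "pull from more sources" means in practice, the payload below combines the clean structured record with the surrounding unstructured context. The field names are illustrative, not a real schema from our delivery work:

```python
from dataclasses import dataclass, field

@dataclass
class DocumentContext:
    """What a document-processing capability actually consumes:
    the structured record plus the messy surrounding context."""
    raw_bytes: bytes                   # the original PDF, untouched
    uploaded_by: str                   # who uploaded it
    document_type: str                 # e.g. "invoice", "contract"
    structured_record: dict            # the clean JSON the API already returns
    email_threads: list[str] = field(default_factory=list)
    notes: list[str] = field(default_factory=list)
    attachments: list[bytes] = field(default_factory=list)

ctx = DocumentContext(
    raw_bytes=b"%PDF-1.7 ...",
    uploaded_by="j.smith",
    document_type="invoice",
    structured_record={"customer_id": "C-1042", "amount": 1250.00},
    notes=["Customer disputed the previous invoice"],
)
```

The structured record is one field among several; everything else is the "messy stuff" Mak kept asking for.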

Confidence Creates Branching

A traditional integration succeeds or fails. The data arrives or it doesn't. The API returns 200 or it returns an error. Your error handling is binary.
AI introduces a middle ground: the response that technically succeeds but might not be trustworthy. A document classification model returns a category with 62% confidence. Is that good enough? It depends. On the document type, the downstream process, the cost of getting it wrong, the availability of a human reviewer.
This means every AI integration needs confidence thresholds. And those thresholds create routing branches. High confidence: proceed automatically. Medium confidence: flag for review. Low confidence: reject and escalate. Each branch needs its own downstream integration, its own error handling, its own monitoring.
One integration point becomes three.
3-5x
more routing branches in AI integrations compared to traditional system-to-system integrations
Source: RIVER internal delivery analysis, 2023
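The three-branch routing described above can be sketched as a single function. The threshold values here are placeholders; in a real deployment they depend on document type and the cost of a wrong decision:

```python
def route(confidence: float,
          auto_threshold: float = 0.85,
          review_threshold: float = 0.50) -> str:
    """Map a model confidence score to one of three routing branches."""
    if confidence >= auto_threshold:
        return "auto_proceed"    # high confidence: proceed automatically
    if confidence >= review_threshold:
        return "human_review"    # medium confidence: flag for review
    return "escalate"            # low confidence: reject and escalate

# The 62% classification from the example above lands in the review queue:
route(0.62)  # -> "human_review"
```

Each return value fans out to its own downstream integration, error handling, and monitoring, which is exactly how one integration point becomes three.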

Feedback Loops Are New

Traditional integrations are fire-and-forget. System A sends data to System B. Done. If the data was wrong, someone notices eventually and fixes the source.
AI integrations need feedback loops. When a human reviewer corrects an AI classification, that correction needs to flow back into the system. Not necessarily for model retraining (though sometimes for that too), but for threshold adjustment, confidence calibration, and audit trails. The integration isn't one-directional any more.
We've found that building the forward path for an AI integration takes about 40% of the effort. The feedback loop, the monitoring, and the fallback paths take the other 60%. This ratio surprised us. It shouldn't have.

Latency Changes Everything

Traditional API integrations run in milliseconds. System A calls System B, gets a response, moves on. Your timeout is 30 seconds and you almost never hit it.
AI inference takes longer. Seconds, sometimes tens of seconds for complex processing. This changes the integration architecture fundamentally. Synchronous request-response patterns that work fine for traditional integrations become bottlenecks. You need async processing, webhook callbacks, polling mechanisms, or streaming responses.
Every system downstream of the AI capability needs to handle the fact that results don't arrive instantly. UI components need loading states. Batch processes need queue management. SLAs need rethinking.
Adding AI document processing to a perfectly good synchronous integration meant the whole thing needed to go async. That's not an AI change - that's an architecture change, and it touches every system in the chain.
John Li
Chief Technology Officer

Patterns That Are Working

Ten months in, we've landed on some patterns that help manage this complexity.

Design for Three Paths, Not One

Every AI integration gets three paths from day one: high confidence (auto-proceed), medium confidence (human review), and low confidence (fallback to manual process). Don't optimise for the happy path and bolt on error handling later. The "unhappy" paths carry most of the traffic in early deployments.

Treat Confidence as a First-Class Data Field

Confidence scores travel with the data through every downstream system. They're not metadata. They're part of the record. This means every API contract, every database schema, every UI component that touches AI output needs to understand confidence. Build this in from the start.
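One way to make confidence part of the record rather than optional metadata is to put it in the type itself, so no downstream system can handle AI output without it. This is a minimal sketch with assumed field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClassifiedDocument:
    """AI output that carries its confidence through every downstream system."""
    document_id: str
    category: str
    confidence: float      # part of the record, not bolted-on metadata
    model_version: str     # which model produced this score

    def __post_init__(self):
        # Reject records that try to travel without a valid confidence score
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0, 1]")

doc = ClassifiedDocument("doc-381", "invoice", 0.62, "classifier-2023-09")
```

Because the field is required at construction time, a record without a confidence score simply cannot exist in the pipeline.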

Build the Feedback Path Before the Forward Path

Design how corrections flow back before you design how predictions flow forward. This forces you to think about the full lifecycle of an AI decision. What happens when the AI is wrong? Who corrects it? How does that correction get recorded? How does it affect future decisions?
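A correction record along these lines captures the questions above: what the AI predicted, what the human decided, who decided it, and when. The structure is an assumption for illustration, not our production schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Correction:
    """A reviewer's correction of an AI decision, kept for audit trails
    and confidence calibration."""
    prediction_id: str
    predicted_category: str
    predicted_confidence: float
    corrected_category: str
    reviewer: str
    corrected_at: datetime

def record_correction(log: list, c: Correction) -> None:
    # Append-only: corrections feed calibration and audit, never overwrite
    log.append(c)

audit_log: list[Correction] = []
record_correction(audit_log, Correction(
    prediction_id="doc-381",
    predicted_category="invoice",
    predicted_confidence=0.62,
    corrected_category="purchase_order",
    reviewer="a.reviewer",
    corrected_at=datetime.now(timezone.utc),
))
```

Designing this record first forces the forward path to emit a `prediction_id` that corrections can refer back to, which is the whole point of building the feedback path before the forward path.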

Use Async by Default

Even if the AI inference is fast today, design the integration as async. Models get larger. Processing gets more complex. What takes 2 seconds today might take 15 seconds next quarter when the client wants you to process 50-page documents instead of single-page forms. Async architecture absorbs these changes without rearchitecting.
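A minimal async-by-default shape is submit-then-poll: the caller gets a job id back immediately and checks status later, so inference time can grow without changing the contract. This sketch uses an in-process queue and a simulated worker; a real system would use a job queue and webhook callbacks:

```python
import queue
import threading
import time
import uuid

jobs: dict[str, dict] = {}     # job_id -> {"status": ..., "result": ...}
work: queue.Queue = queue.Queue()

def submit(document: bytes) -> str:
    """Enqueue a document and return immediately with a job id."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    work.put((job_id, document))
    return job_id

def worker() -> None:
    """Simulated inference worker; a real one would call the model service."""
    while True:
        job_id, _document = work.get()
        time.sleep(0.01)       # stand-in for slow inference
        jobs[job_id] = {"status": "done", "result": {"category": "invoice"}}
        work.task_done()

threading.Thread(target=worker, daemon=True).start()

job_id = submit(b"%PDF-1.7 ...")
work.join()                    # a real caller would poll or receive a webhook
```

Whether inference takes 2 seconds or 15, nothing about `submit` changes; only the caller's patience does.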

Version Your AI Contracts

Traditional API versioning isn't enough. AI integrations need to version the model, the prompt, the confidence thresholds, and the routing rules independently. When a client reports that "the AI changed," you need to know exactly which component changed and when.
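One way to make "which component changed?" answerable is to pin all four versions together in a single deployable contract. The version strings below are hypothetical examples:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIContract:
    """Each component of an AI integration versioned independently."""
    model_version: str        # e.g. "classifier-2023-09-14"
    prompt_version: str       # e.g. "prompt-v7"
    thresholds_version: str   # confidence thresholds as deployed config
    routing_version: str      # the branch rules built on those thresholds

    def tag(self) -> str:
        """A single tag to stamp on every prediction this contract produces."""
        return "/".join([self.model_version, self.prompt_version,
                         self.thresholds_version, self.routing_version])

contract = AIContract("classifier-2023-09-14", "prompt-v7", "thr-v3", "route-v2")
```

If every prediction is stamped with `contract.tag()`, a client report that "the AI changed" on a given date can be traced to exactly one component bump.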

The Multiplier Effect

The title of this post calls AI a complexity multiplier. That's deliberate. It's not additive.
If you have five integrated systems and you add AI to one integration point, you haven't added one new thing. You've changed the nature of that integration in ways that ripple outward. The systems upstream need to provide richer data. The systems downstream need to handle confidence and latency. The monitoring needs to understand probabilistic output. The error handling needs more branches.
We're learning to plan for this. We're getting better at estimating the true scope of AI integration work. But we're also honest that ten months isn't enough time to have all the answers. This field is moving fast, and the integration patterns are evolving with it.
What we can say with confidence: if you're planning an AI initiative and your integration estimate looks similar to a traditional integration estimate, it's too low. Multiply it.