The $25 Million Zoom Call Where Nobody Was Real

Engineering firm Arup lost $25.6 million after a finance worker joined a video call where every participant, including the CFO, was an AI-generated deepfake. The era of trust-by-default is over.
10 November 2025·5 min read
Isaac Rolfe
Managing Director
In early 2024, a finance worker at Arup, one of the world's largest engineering firms, joined a video call with the company's CFO and several senior colleagues. The CFO instructed the worker to authorise a series of wire transfers. The worker complied. Every person on that call was an AI-generated deepfake.
$25.6M
lost to a single deepfake video call at Arup (HKD 200 million)

What Actually Happened

The employee received a message from someone claiming to be Arup's UK-based CFO. Suspicious at first, the worker agreed to a video call to verify. On the call, everything checked out. Faces matched known colleagues. Voices sounded right. Mannerisms were convincing.
Over the course of that single call, the employee authorised 15 separate wire transfers totalling HKD 200 million, approximately $25.6 million USD.
The fraud was only discovered days later when the employee followed up through internal channels. By then, the money was gone.

This Is Not an Isolated Incident

Arup's loss made headlines, but it was one data point in a trend that's accelerating faster than most organisations realise.
$40B
projected global deepfake fraud losses by 2027, up from $12.3B in 2023
In Q1 2025 alone, total deepfake fraud losses exceeded $200 million globally. Financial services, professional services, and government agencies are the primary targets, but no sector is immune. The tools required to generate convincing deepfakes in real time are commercially available and improving monthly.

Why Traditional Verification Failed

The Arup attack worked because it exploited the single most common verification protocol in enterprise: visual confirmation on a video call.
Think about how your organisation handles high-value approvals today. A request comes through email or messaging. Someone says "let's jump on a call to confirm." You see the person's face, hear their voice, and proceed.
That protocol assumed video calls were trustworthy. They no longer are.
The attack surface has shifted. Identity is now the perimeter, not the network. Firewalls, VPNs, and encrypted channels protect data in transit. None of them verify that the person on the other end of the call is who they claim to be.

What Enterprise Leaders Need to Change

Multi-channel verification for high-value actions. No single channel, including video, should be sufficient to authorise transfers above a threshold. Require confirmation through a separate, pre-established channel. A callback to a known number. A signed message through an internal system. Something the attacker can't replicate from a single compromised channel.
Duress and anomaly protocols. Train staff to recognise urgency pressure, a hallmark of social engineering. If someone on a call says "this needs to happen now, skip the usual process," that is the moment to slow down.
Assume compromise. The question is no longer "could someone fake a video call?" The answer is yes, cheaply and convincingly. The question is "what controls remain effective when video verification is unreliable?"
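The multi-channel principle above can be sketched in code. The following is a minimal illustration, not any organisation's actual control: it assumes a shared secret established out of band (e.g. at onboarding, never sent over email, chat, or a call), and requires every transfer above a threshold to carry a valid signature from that second channel. All names and the threshold value are hypothetical.

```python
import hmac
import hashlib

# Hypothetical shared secret, established through a pre-existing channel.
# An attacker who compromises only the video/chat channel never sees it.
SHARED_SECRET = b"pre-established-out-of-band-key"
APPROVAL_THRESHOLD = 10_000  # illustrative: amounts above this need a second channel

def sign_request(amount: int, beneficiary: str, secret: bytes) -> str:
    """Sign the canonical request fields with the out-of-band secret."""
    payload = f"{amount}:{beneficiary}".encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_request(amount: int, beneficiary: str,
                   signature: str, secret: bytes) -> bool:
    """A video call alone is never sufficient above the threshold:
    the request must also carry a valid second-channel signature."""
    if amount <= APPROVAL_THRESHOLD:
        return True  # below threshold, the normal process applies
    expected = sign_request(amount, beneficiary, secret)
    # Constant-time comparison avoids leaking the expected value via timing.
    return hmac.compare_digest(expected, signature)

# A request forged on a deepfake call fails without the secret:
good_sig = sign_request(2_000_000, "ACME Ltd", SHARED_SECRET)
print(verify_request(2_000_000, "ACME Ltd", good_sig, SHARED_SECRET))  # True
print(verify_request(2_000_000, "ACME Ltd", "forged", SHARED_SECRET))  # False
```

The design point is that the signature binds the approval to something the attacker cannot replicate from a single compromised channel, which is exactly the property a convincing face and voice no longer provide.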

The Bigger Picture

Arup is a sophisticated, global organisation with mature security practices. They still lost $25.6 million to a technique that, two years earlier, would have been considered science fiction.
The gap between what AI can generate and what humans can detect is widening. Every enterprise needs to audit its verification chain and ask: which of these steps assume that seeing is believing?
Because that assumption just became a $25.6 million liability.