The traditional hiring process (CV review, interview, reference check, offer) was designed in an era when impersonating someone required significant effort. That era is over.
The Numbers
1 in 4
job applicants could be fake by 2028, according to Gartner
That projection landed quietly in a Gartner report, but the implications are significant. One quarter of applicants. Not in some distant future, but within two years.
The fraud numbers are already moving. Consumers lost $12.5 billion to fraud in 2025, a 25% increase year on year. A growing share of that fraud involves synthetic identities: people who don't exist applying for real jobs at real organisations.
72%
of business leaders identify AI-enabled fraud as a top operational challenge
How It Works
North Korean operatives have been caught using AI-generated deepfakes to apply for remote technology roles at Western companies. The playbook is straightforward: create a synthetic identity, pass the video interview using real-time face generation, get hired, and use the access to exfiltrate data or funnel salary payments back to the regime.
This is not speculation. The FBI and DOJ have prosecuted multiple cases. Dozens of Fortune 500 companies have been infiltrated. The attackers are refining their methods with each attempt.
But state-sponsored actors are just the high end of the spectrum. The same tools are available to anyone with a browser and a credit card. Fraudulent applicants seeking salary arbitrage, access to systems, or insider information don't need government backing. They need a $5 identity kit and a quiet room.
What HR Teams Need to Change
The standard hiring process has three verification points: the CV, the interview, and the reference check. All three are now vulnerable.
CVs can be generated in seconds. Work history, education, certifications: all fabricated with consistent detail. AI writing is good enough that the CV reads naturally. Employment verification services catch some of this, but they rely on databases that lag behind the fraud.
Video interviews can be attended by a deepfake. Real-time face and voice generation is mature enough to pass a 30-minute conversation. The interviewer sees a person, hears coherent answers, and forms a positive impression of someone who does not exist.
Reference checks can be handled by the same person using different voice clones on different phone numbers. Or by a network of co-conspirators providing pre-arranged references.
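One inexpensive check follows directly from that weakness: contact details supplied for references can be cross-referenced against other applications. The same phone number or email address appearing behind references for unrelated candidates is a strong signal of a shared fraud network. A minimal sketch, with illustrative field names rather than any particular ATS schema:

```python
from collections import defaultdict

def flag_shared_reference_contacts(applications):
    """Flag reference phone numbers or emails reused across candidates.

    `applications` is a list of dicts with illustrative fields:
      {"candidate_id": "...", "references": [{"phone": "...", "email": "..."}]}
    Returns {contact_detail: candidate_ids} for any detail supplied
    by more than one candidate.
    """
    seen = defaultdict(set)  # contact detail -> candidates who supplied it
    for app in applications:
        for ref in app["references"]:
            for contact in (ref.get("phone"), ref.get("email")):
                if contact:
                    seen[contact.strip().lower()].add(app["candidate_id"])
    return {c: ids for c, ids in seen.items() if len(ids) > 1}
```

A hit doesn't prove fraud on its own, but it is cheap to compute and catches exactly the reuse pattern described above.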
Practical Changes
In-person verification for sensitive roles. If the role has access to financial systems, customer data, source code, or critical infrastructure, verify identity in person at least once during the process. This is inconvenient for remote-first organisations. It is less inconvenient than a breach.
Multi-factor identity verification during onboarding. Government-issued ID, verified through a service that checks against the issuing authority. Liveness detection that goes beyond a single selfie. Cross-referencing against professional registries and educational institutions.
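A rough sketch of how those factors might be composed in an onboarding flow. Every function here is a hypothetical stub standing in for an external service, not a real vendor API; the point is the fail-closed structure, where any single failed factor blocks access.

```python
def verify_government_id(candidate: dict) -> bool:
    # Placeholder: validate the ID document against the issuing
    # authority via whichever verification provider you use.
    raise NotImplementedError("wire up an ID-verification provider")

def verify_liveness(candidate: dict) -> bool:
    # Placeholder: multi-frame liveness challenge, not a single selfie.
    raise NotImplementedError("wire up a liveness-detection provider")

def verify_credentials(candidate: dict) -> bool:
    # Placeholder: confirm degrees and certifications at the source
    # (professional registries, educational institutions).
    raise NotImplementedError("wire up registry cross-referencing")

def onboarding_checks_pass(candidate: dict) -> bool:
    """Layered verification: every factor must pass before access is granted."""
    checks = [
        ("government ID", verify_government_id),
        ("liveness", verify_liveness),
        ("credential cross-reference", verify_credentials),
    ]
    for name, check in checks:
        if not check(candidate):
            print(f"Onboarding blocked: {name} check failed")
            return False
    return True
```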
Behavioural monitoring during probation. Unusual access patterns, data exfiltration attempts, and login anomalies should trigger review. Establish a baseline for normal behaviour in the first 90 days and flag deviations early.
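A minimal sketch of that baseline-and-flag idea, assuming you can export a daily count of sensitive-resource accesses per user; the z-score threshold and data layout are illustrative:

```python
from statistics import mean, stdev

def flag_anomalous_days(baseline_counts, recent_days, z_threshold=3.0):
    """Flag days whose activity deviates sharply from the probation baseline.

    baseline_counts: daily access counts from the baseline window
        (e.g. the first 90 days), as a list of ints.
    recent_days: (date, count) pairs to evaluate.
    Returns the dates whose z-score exceeds the threshold.
    """
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1.0  # guard against a zero-variance baseline
    return [date for date, count in recent_days
            if abs(count - mu) / sigma > z_threshold]
```

In practice the baseline would be kept per user and per resource class, and a flag would feed a review queue rather than trigger an automatic block, but the mechanics really are this simple once a baseline exists.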
Rethink remote-only hiring for high-trust roles. This is uncomfortable advice for organisations that have committed to remote work. The reality is that remote-only hiring, for roles with significant access privileges, carries a risk profile that most organisations have not fully assessed.
The Hiring Process Was Not Designed for This
Every step of the traditional hiring funnel assumes that the person you're talking to is real. That assumption held for decades. It doesn't hold now.
The organisations that adapt their processes early will avoid the worst outcomes. The organisations that assume "it won't happen to us" are exactly the ones it will happen to.
