CV Matching Without the Bias

AI CV matching that reduces bias rather than encoding it. How to build recruitment screening that is faster, fairer, and more effective.
2 February 2026·7 min read
Tim Hatherley-Greene
Chief Operating Officer
Dr Tania Wolfgramm
Chief Research Officer
The promise of AI in recruitment is speed: screen a thousand CVs in minutes instead of days. The risk is that speed amplifies existing biases rather than eliminating them. We have spent the last year building CV matching systems that are deliberately designed to reduce bias, not encode it. Here is what we have learned.

The Bias Problem

AI recruitment tools have a well-documented bias problem. Amazon's infamous CV screening tool, trained on a decade of hiring data, learned to penalise CVs that contained the word "women's" and downgrade graduates from all-women's colleges. The model was not malicious. It was faithful. It learned the patterns in the data, and the data reflected a decade of biased hiring decisions.
This is the fundamental challenge: any AI system trained on historical hiring data will learn the biases embedded in that history. If your organisation has historically under-hired from certain demographics, the model will learn to deprioritise candidates from those demographics.
The solution is not to avoid AI in recruitment. It is to build AI recruitment tools that are explicitly designed to counteract bias rather than replicate it.
62%
of NZ enterprises concerned about bias in AI recruitment tools
Source: HRNZ, AI in Recruitment Survey, 2025

How Bias-Aware CV Matching Works

Skills-Based Matching

The first principle is to match on skills and capabilities, not on proxies. Traditional CV screening often uses proxies: specific university names, previous employer brands, years of experience at a particular level. These proxies correlate with demographics in ways that introduce bias.
Skills-based matching extracts specific capabilities from the CV and matches them against the role requirements. "Can this person do the job?" is a fairer question than "does this person's background look like our existing team?"
This requires defining roles in terms of capabilities rather than credentials. "Five years' experience in a Big Four firm" is a proxy. "Demonstrated capability in financial analysis, stakeholder management, and regulatory reporting" is a skills-based requirement.
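The distinction can be sketched in code. This is a minimal, illustrative scorer, not a production matcher: the skill names and weights are invented for the example, and a real system would draw them from a domain-specific taxonomy.

```python
# Minimal sketch of skills-based matching: score a candidate on the
# weighted overlap between capabilities extracted from the CV and the
# role's requirements. Skill names and weights are illustrative only.

ROLE_REQUIREMENTS = {
    "financial analysis": 0.4,
    "stakeholder management": 0.3,
    "regulatory reporting": 0.3,
}

def capability_match_score(candidate_skills: set[str],
                           requirements: dict[str, float]) -> float:
    """Weighted fraction of required capabilities the candidate evidences."""
    return sum(weight for skill, weight in requirements.items()
               if skill in candidate_skills)

score = capability_match_score(
    {"financial analysis", "regulatory reporting", "data visualisation"},
    ROLE_REQUIREMENTS,
)
```

Because the score is built only from capability evidence, there is no slot in the calculation for a university name or employer brand to enter the ranking.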

Blind Processing

The matching system processes CVs with identifying information removed: name, address, age indicators, educational institution names (replaced with qualification levels), and profile photos. The model scores candidates on capability match without access to demographic signals.
This is not foolproof. Language patterns, extracurricular activities, and career trajectories can carry demographic signals even when explicit identifiers are removed. But blind processing removes the most direct sources of bias and forces the model to focus on capability evidence.
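A toy redaction pass shows the shape of blind processing. Production systems typically use named-entity recognition and a maintained institution list; the strings and regex patterns here are assumptions for illustration only.

```python
import re

# Replace direct identifiers before the CV reaches the matching model.
# INSTITUTIONS maps institution names to qualification levels; the single
# entry here is a placeholder for a maintained list.
INSTITUTIONS = {"University of Auckland": "Bachelor-level institution"}

def redact(cv_text: str, candidate_name: str) -> str:
    text = cv_text.replace(candidate_name, "[CANDIDATE]")
    for name, level in INSTITUTIONS.items():
        text = text.replace(name, level)
    text = re.sub(r"\S+@\S+", "[EMAIL]", text)             # email addresses
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[PHONE]", text)  # phone numbers
    return text
```

As the paragraph above notes, this removes direct identifiers only; indirect signals in language and career trajectory survive redaction, which is why redaction alone is not sufficient.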

Fairness Auditing

Every matching run produces a fairness report alongside the candidate ranking. The report analyses whether the scoring distribution shows statistical bias across demographic dimensions (where demographic data is available and consent has been given for this analysis).
If the model consistently scores one demographic group lower, that is a signal to investigate the scoring criteria, not to accept the output. The fairness audit is not a compliance checkbox. It is an active feedback mechanism that keeps the system honest.
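One way to operationalise this check is to compare mean scores across consented demographic groups, in the spirit of the four-fifths adverse-impact heuristic. The threshold and the summary statistic in this sketch are policy choices, not fixed constants.

```python
from statistics import mean

def fairness_report(scores_by_group: dict[str, list[float]],
                    threshold: float = 0.8) -> dict:
    """Flag groups whose mean score falls below `threshold` times the
    highest group mean. Intended to run on every matching cycle."""
    means = {group: mean(scores) for group, scores in scores_by_group.items()}
    top = max(means.values())
    ratios = {group: m / top for group, m in means.items()}
    flagged = [group for group, ratio in ratios.items() if ratio < threshold]
    return {"group_means": means, "ratios": ratios, "flagged": flagged}
```

A flagged group is a trigger to investigate the scoring criteria, not a result to suppress.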
Tania brings a critical perspective here. Fairness in the NZ context carries specific obligations. Te Tiriti o Waitangi, the Human Rights Act, and the Privacy Act create a framework that generic international AI fairness approaches do not fully address. Bias-aware CV matching for NZ organisations needs to account for these obligations, not just algorithmic fairness metrics.

What We Have Learned

Define "fair" before you build. Fairness is not a technical property. It is a values decision. Does "fair" mean equal representation in shortlists? Equal scoring distributions? Equal opportunity to demonstrate capability? Different definitions lead to different system designs. Make the decision explicitly before building.
Skills taxonomies matter enormously. The quality of the matching depends entirely on the quality of the skills taxonomy. A taxonomy that maps closely to how your industry describes capability will produce better matches than a generic framework. Invest in building a domain-specific skills taxonomy.
Candidate experience is adoption-critical. Tim's adoption expertise has been essential here. A CV matching system that candidates do not trust will not attract the best applicants. Transparency about how AI is used in the screening process, the ability for candidates to see and correct their extracted skills profile, and clear communication about human oversight all build the trust that drives adoption.
Feedback loops close the gap. When a hiring manager overrides the AI ranking, that is a signal. Sometimes the override reflects legitimate contextual judgement the model could not capture. Sometimes it reflects bias the model was designed to avoid. Tracking overrides and analysing their patterns is how the system improves and how the organisation learns about its own decision-making.
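Tracking overrides can be as simple as a structured log with a categorised reason. The field names and reason categories below are hypothetical; what matters is that every override is recorded and the patterns are reviewed.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Override:
    role_id: str     # which vacancy the shortlist was for
    ai_rank: int     # where the model placed the candidate
    human_rank: int  # where the hiring manager placed them
    reason: str      # manager's categorised reason for the change

def override_summary(overrides: list[Override]) -> Counter:
    """Count override reasons so recurring patterns surface for review."""
    return Counter(o.reason for o in overrides)
```

A reason that recurs across many roles deserves scrutiny in both directions: it may be context the model should capture, or bias the organisation should confront.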

The Adoption Challenge

Tim and I have seen a consistent pattern: HR teams are enthusiastic about AI screening until they see a shortlist that looks different from what they expected. The discomfort is the system working. If the AI-generated shortlist looked identical to a human-generated one, the AI would be replicating existing patterns rather than improving on them.
Managing this discomfort is an adoption challenge, not a technical one. It requires clear communication about why the shortlist looks different, evidence that capability-matched candidates perform well, and leadership commitment to the values the system embodies.

Implementation Considerations

Privacy is non-negotiable. CV data is personal information under the Privacy Act. The system must handle it with appropriate consent, security, access controls, and retention limits. Candidates must know how their data is processed and have the right to request its deletion.
Human decision-making remains. AI produces a ranked shortlist with capability match scores. Humans make hiring decisions. The AI does not decide who gets hired. It decides who gets seen. This distinction matters legally, ethically, and practically.
Regular auditing. The fairness audit should run on every matching cycle, with quarterly reviews of aggregate patterns. Bias can emerge gradually as the model processes more data. Continuous monitoring catches drift that point-in-time audits miss.
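Drift can be checked by comparing a group's fairness ratio over recent cycles against an earlier baseline. The window size and tolerance below are assumptions to be tuned per organisation, not recommended values.

```python
def drifted(ratio_history: list[float], window: int = 4,
            tolerance: float = 0.05) -> bool:
    """True if a group's mean fairness ratio over the last `window`
    cycles has fallen more than `tolerance` below the first `window`."""
    if len(ratio_history) < 2 * window:
        return False  # not enough cycles to compare yet
    baseline = sum(ratio_history[:window]) / window
    recent = sum(ratio_history[-window:]) / window
    return baseline - recent > tolerance
```

A gradual slide like this is exactly what a point-in-time audit misses: each individual cycle looks acceptable, while the trend does not.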
The goal is not perfect objectivity. It is systematic improvement over the biases that human-only screening reliably produces. AI cannot eliminate bias. It can make bias visible, measurable, and addressable.