Grant-making in New Zealand distributes billions of dollars annually across thousands of applications: community grants, research funding, government programmes, and philanthropic initiatives. The assessment process is remarkably similar everywhere: a panel of reviewers reads applications, scores them against criteria, and debates the rankings. It is earnest, well-intentioned, and structurally inconsistent. AI can bring the consistency that fairness demands.
The Fairness Problem
Grant review has the same consistency challenges as any panel-based assessment, with an additional dimension: the outcomes directly affect communities. Inconsistent scoring does not just waste time. It means some communities receive funding and others do not, based partly on which reviewer read their application and how carefully.
Tania and I have both served on funding panels. The pattern is familiar:
Reviewer fatigue. A reviewer assessing 40 applications over a weekend does not give the 38th application the same attention as the 3rd. The quality of assessment degrades with volume. Applications reviewed later in the batch systematically receive less thorough evaluation.
Interpretation variance. "Demonstrates community need" is a judgement call. One reviewer interprets need through demographic data. Another through community voice. A third through comparison with other applications. All three are legitimate interpretations. None produce the same score.
Writing quality bias. A well-written application scores higher than an equally meritorious but poorly written one. This systematically disadvantages applicants who lack professional grant-writing capability: small community organisations, volunteer-led groups, and organisations serving communities where English is a second language.
Unconscious familiarity bias. Reviewers naturally favour applications from organisations they recognise. Established organisations with track records get the benefit of the doubt. New organisations with equal merit but less visibility get more scrutiny.
35%: average score variance between reviewers on the same grant application. Source: RIVER grant process analysis, 2025.
What AI Grant Review Does
Application Analysis
The AI reads each application and extracts structured information relevant to each assessment criterion: stated need, proposed activities, expected outcomes, budget justification, organisational capability, and alignment with funding priorities.
This extraction ensures that every application is analysed against every criterion, regardless of how clearly the applicant organised their submission. An applicant who buried their strongest evidence in an appendix gets that evidence surfaced.
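To make the extraction step concrete, here is a minimal sketch of the structured target the AI fills in for each application. The field names and example values are illustrative assumptions, not a real funder's schema; an actual deployment would mirror the funder's own criteria.

```python
from dataclasses import dataclass, field

# Illustrative extraction target: one record per application, one field
# per assessment criterion, so nothing depends on how the applicant
# organised their submission.
@dataclass
class ApplicationAnalysis:
    stated_need: str
    proposed_activities: list[str]
    expected_outcomes: list[str]
    budget_justification: str
    organisational_capability: str
    priority_alignment: str
    # criterion -> where the supporting evidence was found, so evidence
    # buried in an appendix is surfaced rather than missed
    evidence_locations: dict[str, str] = field(default_factory=dict)

# Hypothetical example record
analysis = ApplicationAnalysis(
    stated_need="No after-school programme within 20 km",
    proposed_activities=["weekly sports sessions"],
    expected_outcomes=["higher youth participation"],
    budget_justification="costs itemised per activity",
    organisational_capability="five years delivering similar work",
    priority_alignment="matches youth wellbeing priority",
    evidence_locations={"stated_need": "appendix B"},
)
```

Because every application lands in the same structure, panel members compare like with like, and the `evidence_locations` map shows exactly where each claim was substantiated.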
Consistency Scoring
Each application is scored against a detailed rubric that operationalises the assessment criteria. Not "demonstrates community need" but "provides quantified evidence of the problem being addressed, identifies the affected population, and connects the proposed activities to the identified need."
The rubric-based scoring produces consistent baseline scores across all applications. The 38th application gets the same analytical rigour as the 3rd.
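One way to picture rubric-based scoring is as a fixed checklist applied identically to every application. The rubric below is a hypothetical sketch (the criterion and check names are assumptions for illustration): each criterion's score is simply the fraction of its operationalised checks that the analysis found evidence for.

```python
# Hypothetical rubric: each criterion is operationalised as concrete
# checks, so the 38th application is scored exactly like the 3rd.
RUBRIC = {
    "community_need": [
        "quantifies the problem being addressed",
        "identifies the affected population",
        "connects proposed activities to the identified need",
    ],
    "budget": [
        "itemises major cost lines",
        "justifies each line against an activity",
    ],
}

def score_application(checks_met: dict[str, set[str]]) -> dict[str, float]:
    """Score each criterion as the fraction of rubric checks satisfied.

    `checks_met` maps criterion -> the checks the analysis found
    evidence for in the application.
    """
    return {
        criterion: len(checks_met.get(criterion, set()) & set(checks)) / len(checks)
        for criterion, checks in RUBRIC.items()
    }

scores = score_application({
    "community_need": {
        "quantifies the problem being addressed",
        "identifies the affected population",
    },
    "budget": set(),
})
# community_need satisfies 2 of 3 checks; budget satisfies none
```

The point is not the arithmetic but the discipline: every score traces back to named checks, which is what makes the baseline auditable.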
Gap and Strength Identification
For each application, the AI identifies specific strengths (well-supported claims, clear logic models, realistic budgets) and gaps (unsupported assertions, missing budget detail, unclear outcomes). This structured analysis gives panel members a starting point for their review.
Equity Analysis
Tania's contribution here is foundational. AI grant review must actively address equity, not just avoid bias. The system tracks scoring patterns across applicant demographics: organisation size, location, sector, and community served. If the scoring systematically disadvantages small organisations or rural applicants, that pattern is flagged for panel discussion.
Beyond statistical equity, Tania has designed the framework to account for the assessment challenges that specific communities face. An application from a small iwi organisation, written collaboratively by community members rather than a professional grant writer, should not be penalised for prose quality when the substance is strong.
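The pattern-flagging described above can be sketched as a simple comparison of group means against the overall mean. Everything here is an assumption for illustration: the dimension name, the 10% threshold, and the data are hypothetical, and a real system would use a statistically sounder test before flagging.

```python
from statistics import mean

def flag_equity_gaps(scored: list[dict], dimension: str,
                     threshold: float = 0.10) -> dict[str, float]:
    """Flag groups whose mean score trails the overall mean by more
    than `threshold`. `scored` is a list of records, each with a total
    score and demographic attributes (field names are illustrative)."""
    overall = mean(a["score"] for a in scored)
    groups: dict[str, list[float]] = {}
    for a in scored:
        groups.setdefault(a[dimension], []).append(a["score"])
    return {
        group: mean(vals)
        for group, vals in groups.items()
        if overall - mean(vals) > threshold
    }

# Hypothetical scoring data for one funding round
applications = [
    {"score": 0.80, "org_size": "large"},
    {"score": 0.78, "org_size": "large"},
    {"score": 0.55, "org_size": "small"},
    {"score": 0.60, "org_size": "small"},
]
flags = flag_equity_gaps(applications, "org_size")
# small organisations trail the overall mean by more than 10%,
# so they are flagged for panel discussion
```

The flag does not change any score; it puts the pattern in front of the panel, where the judgement belongs.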
The Panel Process
AI grant review does not replace the panel. It restructures the panel process:
Before AI: Read all applications. Score individually. Convene. Reconcile wildly different scores. Debate. Vote.
With AI: Review AI analysis and baseline scores for each application. Focus panel discussion on applications where human judgement adds the most value: borderline cases, competing priorities, and equity considerations. Make funding decisions with better information and more time for deliberation.
The panel's expertise is redirected from reading and scoring (which AI does more consistently) to deliberation and judgement (which humans do better).
Implementation for Grant-Making Organisations
- Criteria operationalisation (2-3 weeks). Transform assessment criteria into specific, measurable rubrics. This is the most important step and requires input from experienced panel members.
- System configuration (2-3 weeks). Configure the AI for your application format, criteria, and rubrics.
- Equity framework (1-2 weeks). Define the equity dimensions to monitor and the thresholds for flagging.
- Pilot round (3-4 weeks). Run AI-assisted assessment alongside traditional assessment for a current funding round. Compare outcomes.
- Refinement and deployment (2-3 weeks). Adjust rubrics and equity parameters based on pilot findings. Deploy for the next round.
Total: 10-15 weeks. The pilot round is essential for building panel confidence and calibrating the system against experienced reviewer judgement.
The Fairness Dividend
When grant review is consistent, several things improve simultaneously:
- Applicant trust. Organisations that apply and are declined can be confident the assessment was thorough and fair, which encourages reapplication rather than disengagement.
- Panel efficiency. Reviewers spend their limited volunteer time on deliberation, not data extraction.
- Accountability. Every score has a documented rationale. The basis for funding decisions is transparent and auditable.
- Equity visibility. Patterns of advantage and disadvantage become measurable, which is the first step to addressing them.
Grant-making organisations hold a public trust: distributing resources fairly to the communities that need them most. AI-assisted review serves that trust by bringing the consistency that fairness requires.