AI-powered triage tools are arriving in New Zealand primary care. The promise is compelling: faster patient assessment, more consistent prioritisation, reduced pressure on reception and nursing staff. The risk is equally real. Triage is a clinical judgement, and removing the clinician from that judgement introduces risks that most AI vendors understate.
What You Need to Know
- AI triage tools in primary care can support clinical decision-making but cannot safely replace it. The contextual judgement that experienced triage nurses apply - tone of voice, patient history, community knowledge - is not captured by current models.
- The highest-risk failure mode is not incorrect prioritisation of urgent cases (which AI handles reasonably well) but incorrect deprioritisation of cases that appear routine but aren't. These are the cases where clinical intuition matters most.
- Liability for triage decisions in NZ sits with the practice. AI vendors disclaim clinical responsibility in their terms of service. The practice bears the risk.
- Effective AI triage requires a clinician-in-the-loop model where AI handles initial categorisation and a nurse or clinician reviews and confirms before action is taken.
Where AI Triage Works
I want to be clear that I'm not opposed to AI in triage. I've seen it work well in specific applications.
Structured symptom collection. AI tools that gather patient-reported symptoms before the clinical assessment are genuinely useful. They give the triage nurse a structured starting point rather than beginning from scratch. The patient describes their symptoms through a guided questionnaire, the AI organises the information, and the clinician starts their assessment with better information.
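To make this concrete, here is a minimal sketch of what structured symptom collection might look like in code. All names here (Question, SymptomIntake) are hypothetical, not any vendor's API: the point is that the tool's output is an organised summary for the nurse, not an assessment.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a guided questionnaire whose answers are
# organised into a structured summary the triage nurse reviews
# before the clinical assessment begins.

@dataclass
class Question:
    id: str
    text: str

@dataclass
class SymptomIntake:
    answers: dict = field(default_factory=dict)

    def record(self, question: Question, answer: str) -> None:
        self.answers[question.id] = answer

    def summary(self) -> str:
        # A structured starting point for the clinician, not a diagnosis.
        return "\n".join(f"{qid}: {ans}" for qid, ans in self.answers.items())

intake = SymptomIntake()
intake.record(Question("onset", "When did the symptom start?"), "during exertion")
intake.record(Question("associated", "Any other symptoms?"), "shortness of breath")
print(intake.summary())
```

The key design property is that nothing in the structure encodes a conclusion: the clinician still starts their assessment, just with better information.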
Volume management. In practices handling high call volumes, AI can help categorise contacts by urgency so that the most time-sensitive cases reach clinical staff first. This is a workflow optimisation, not a clinical decision, and it's a legitimate use of the technology.
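The workflow-optimisation framing can be made precise: the AI's category only orders the queue, it never gates access. A minimal sketch, with assumed category names, might look like this.

```python
# Hypothetical sketch of volume management as workflow optimisation:
# the AI's urgency category only orders the contact queue so
# time-sensitive cases reach clinical staff first. It makes no
# clinical decision; every contact still reaches a clinician.

URGENCY_RANK = {"urgent": 0, "soon": 1, "routine": 2}  # assumed categories

def order_queue(contacts):
    # contacts: list of (patient_id, ai_category) pairs.
    # Unknown categories sort last rather than being dropped.
    return sorted(contacts, key=lambda c: URGENCY_RANK.get(c[1], len(URGENCY_RANK)))

queue = [("p-2", "routine"), ("p-1", "urgent"), ("p-3", "soon")]
print(order_queue(queue))  # the urgent contact surfaces first
```

Note that nothing is filtered out: ordering is reversible and auditable in a way that automated rejection is not.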
35% reduction in average wait time for urgent triage when AI-assisted categorisation is used alongside clinical review (Source: British Journal of General Practice, AI-Assisted Triage Study, 2024).
After-hours routing. AI triage tools that help patients decide between after-hours care, a morning appointment, or the emergency department can reduce unnecessary ED presentations while ensuring genuinely urgent cases aren't delayed. Again, this works best as guidance rather than a gate.
Where It Fails
The failure mode that concerns me most isn't the obviously urgent case. A patient describing chest pain will be flagged as high priority by any competent triage tool, AI or otherwise. The dangerous cases are the ones that look routine.
A middle-aged man calling about a sore shoulder. The AI categorises it as musculoskeletal, non-urgent. An experienced triage nurse might ask a few more questions and discover the pain started during exertion, is accompanied by shortness of breath, and the patient has a family history of cardiac events. That's not a sore shoulder. That's a potential cardiac event.
12% of cases initially categorised as non-urgent by AI triage tools were reclassified as urgent after clinical review (Source: Annals of Emergency Medicine, AI Triage Accuracy Study, 2024).
The difference between the AI assessment and the clinical assessment is context. The nurse knows this patient. She knows the community. She can hear the hesitation in his voice. She asks the follow-up question that the algorithm doesn't know to ask because the patient didn't mention the relevant symptom.
This isn't a gap that will be closed by better models. It's a fundamental limitation of text-based and structured-input assessment. Clinical triage requires reading between the lines, and that remains a human skill.
The Liability Question
This is the part that should keep practice managers awake at night. In New Zealand, clinical responsibility for triage decisions rests with the practice, regardless of what tools are used to support those decisions.
Every AI triage vendor I've reviewed includes language in their terms of service that explicitly disclaims clinical responsibility. "This tool is for informational purposes only." "Clinical decisions remain the responsibility of the healthcare provider." "This is not a diagnostic tool."
If the vendor won't accept clinical responsibility for the triage decisions their tool makes, that tells you everything about where they think the risk lies. It's with you.
Rikimata Massey, Health CIO Advisory
Which means that if an AI triage tool deprioritises a patient who subsequently has a serious adverse outcome, the practice bears the liability. Not the vendor. Not the AI developer. The practice.
This doesn't mean AI triage tools shouldn't be used. It means they should be used with clinical oversight that's documented, consistent, and defensible. A clinician-in-the-loop model isn't just good practice. It's risk management.
The Right Model
The clinician-in-the-loop model for AI triage looks like this.
AI handles initial data collection and categorisation. The patient provides symptom information through a structured tool. The AI organises this into a preliminary assessment with a suggested urgency level.
A clinician reviews every categorisation. Not just the flagged ones. Every categorisation. The review can be rapid - a nurse scanning a queue of assessments and confirming or adjusting the AI's recommendation. But it must happen before the categorisation drives any clinical action.
The clinician has override authority and easy escalation. The system must make it trivially easy to override the AI's recommendation. If the clinician disagrees, their judgement prevails immediately. No friction, no justification required.
The practice audits regularly. How often does the clinician override the AI? On which types of cases? Are there patterns in the overrides that suggest the AI is consistently misjudging certain presentations? These audits inform both the practice's clinical governance and the vendor's product improvement.
This model captures the genuine benefits of AI triage - speed, consistency, structured data - while preserving the clinical judgement that keeps patients safe. It's more work than letting the AI run autonomously. It's also the only approach I'd be comfortable recommending to a practice.
AI triage isn't a technology decision. It's a clinical governance decision. And clinical governance decisions require clinicians in the loop.
