Your team doesn't need to become AI engineers. But they do need a new set of skills to work effectively in an organisation where AI is part of the toolkit. The skills gap isn't where most leaders think it is.
The Skills Gap Nobody's Talking About
When executives hear "AI skills gap," they think about hiring data scientists and ML engineers. That's a real gap, and it gets plenty of attention. But there's a larger, quieter one: the skills that every team member - from claims assessors to marketing managers to customer service leads - needs to work alongside AI effectively.
This is a literacy gap, not a hiring gap. And it requires training, not recruitment.
The Four Skills
1. AI Judgement
The ability to evaluate AI outputs critically.
AI generates confident, plausible text. Your team needs to distinguish between "the AI got this right" and "the AI sounds right but is wrong." This is a new professional skill that didn't exist two years ago, and it's not optional.
Tim: What this looks like in practice is the ability to question AI outputs with the same rigour you'd apply to a junior colleague's work. You wouldn't accept a report from a new hire without review. You shouldn't accept AI outputs without review either. But the review needs to be informed - you need to know what AI is likely to get wrong, where hallucination is most common, and what "confidence" actually means in context.
Isaac: The dangerous middle ground is people who trust AI outputs completely or people who dismiss them completely. Both waste the tool's value. The skill is calibrated trust - knowing when to lean on the AI and when to verify independently.
How to develop it: Structured exercises where teams evaluate AI outputs against known correct answers. Start with your domain. Have the AI generate reports, analyses, or summaries using your real data, and then have domain experts assess accuracy. Document the patterns - where is the AI reliably good? Where does it consistently fail?
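The exercise above can be sketched as a simple scoring harness. This is a minimal illustration, not a prescribed tool: the task categories and sample verdicts are hypothetical placeholders, and in practice the verdicts would come from your domain experts reviewing real AI outputs.

```python
from collections import defaultdict

def score_outputs(cases):
    """Tally AI accuracy per task category.

    `cases` is a list of dicts with keys:
      category   - e.g. "summary", "calculation", "compliance"
      ai_correct - the expert's verdict: did the AI get it right?
    Returns {category: (correct, total)}.
    """
    tally = defaultdict(lambda: [0, 0])
    for case in cases:
        tally[case["category"]][1] += 1  # count every reviewed case
        if case["ai_correct"]:
            tally[case["category"]][0] += 1  # count the correct ones
    return {cat: tuple(counts) for cat, counts in tally.items()}

# Hypothetical results from one review session:
cases = [
    {"category": "summary", "ai_correct": True},
    {"category": "summary", "ai_correct": True},
    {"category": "calculation", "ai_correct": False},
    {"category": "calculation", "ai_correct": False},
    {"category": "compliance", "ai_correct": True},
]

patterns = score_outputs(cases)
```

Run over enough sessions, the tallies become the documented pattern the exercise is after: categories where the AI is reliably good, and categories where it consistently fails.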
2. Effective Prompting
The ability to get useful outputs from AI tools.
"Prompt engineering" sounds technical. In practice, it's communication. The same skill that makes someone good at writing clear emails makes them good at writing clear prompts. But there are specific techniques that dramatically improve results.
- Specificity. "Summarise this document" produces generic results. "Summarise the key compliance implications of this document for a NZ insurance company with 500+ employees" produces useful results.
- Context. Give the AI the context it needs. Role, audience, format, constraints. The more context you provide, the better the output.
- Iteration. Good prompting is conversational. Ask, review, refine, ask again. Treating AI as a conversation partner rather than a search engine produces better results.
How to develop it: Workshops where teams practice prompting against real work tasks. Share effective prompts across the team. Build a library of prompts that work well for your specific domain.
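A prompt library can be as simple as a shared template that forces the specificity and context techniques above into every prompt. The field names and example values here are illustrative, not a standard:

```python
# A reusable template that bakes in role, audience, format, and constraints.
PROMPT_TEMPLATE = (
    "You are acting as: {role}\n"
    "Audience: {audience}\n"
    "Task: {task}\n"
    "Format: {fmt}\n"
    "Constraints: {constraints}"
)

def build_prompt(role, audience, task, fmt, constraints):
    """Assemble a specific, context-rich prompt from its parts."""
    return PROMPT_TEMPLATE.format(
        role=role, audience=audience, task=task,
        fmt=fmt, constraints=constraints,
    )

# Hypothetical example in the spirit of the "specificity" bullet above:
prompt = build_prompt(
    role="compliance analyst at a NZ insurer",
    audience="executive leadership",
    task="Summarise the key compliance implications of the attached document",
    fmt="five bullet points",
    constraints="flag anything that needs legal review",
)
```

The value is less in the code than in the shared structure: a team filling in the same five fields produces consistently better prompts than one writing "summarise this document" from scratch each time.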
36% of enterprise workers reported using AI tools at work by mid-2023, most without formal training. (Source: Microsoft Work Trend Index, May 2023)
3. Workflow Integration
The ability to identify where AI adds value in existing processes.
This is the strategic skill. Not everyone needs it deeply, but team leads and process owners need to be able to look at their workflows and identify:
- Which tasks are repetitive and rule-based (strong AI candidates)
- Which tasks require nuanced judgement (human tasks, possibly AI-assisted)
- Where human time is spent on low-value activities that AI could handle
- Where AI outputs need human review before action
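The triage above can be sketched as a checklist. The attribute names are illustrative placeholders; a real assessment would involve more dimensions than three booleans:

```python
def triage_task(repetitive, rule_based, needs_judgement):
    """Suggest where a task sits on the AI-candidate spectrum."""
    if repetitive and rule_based and not needs_judgement:
        return "strong AI candidate"
    if needs_judgement:
        return "human task, possibly AI-assisted"
    return "review case by case"
```

Even this crude version is useful in a process-mapping workshop: it forces participants to say, task by task, whether the work is repetitive, rule-based, or judgement-heavy.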
Isaac: The best AI use cases don't come from the AI team. They come from domain experts who understand their own workflows well enough to see where AI fits. Your claims assessors know which parts of claims processing are tedious and rule-based. Your advisers know which client questions come up repeatedly. Empowering these people to identify AI opportunities is more valuable than hiring a consultant to do it.
How to develop it: Process mapping exercises focused on identifying AI opportunities. Not technology-first - workflow-first. "Where do we spend time on tasks a machine could handle?" is a better question than "where can we use AI?"
4. Data Awareness
The ability to understand what data AI needs and what data shouldn't be shared with AI.
Every team member who uses AI tools needs a basic understanding of:
- What data the AI can see and what it can't
- What data should never be shared with external AI services
- How to evaluate whether AI outputs are based on current, reliable data
- Basic data privacy and governance principles as they apply to AI
Tim: This isn't about making everyone a data governance expert. It's about practical awareness. "Should I paste this client contract into ChatGPT?" is a question every team member will face. They need enough knowledge to answer it correctly - and the answer, right now, is almost certainly no.
How to develop it: Clear, practical guidelines. Not a 50-page policy document. A one-page reference card: what data can go into AI tools, what can't, what to do if you're unsure.
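The reference card amounts to a small lookup table plus a safe default. A sketch, with hypothetical categories and rules that your own governance team would replace:

```python
# Illustrative rules only - the real card comes from your governance team.
DATA_RULES = {
    "public marketing copy": "ok",
    "internal process notes": "check with your lead",
    "client contracts": "never",
    "personal customer data": "never",
}

def can_share_with_ai(data_category):
    """Return the guideline for a data category; default to asking."""
    return DATA_RULES.get(data_category, "unsure - ask before sharing")
```

The important design choice is the default: anything not explicitly cleared falls through to "ask before sharing," which is exactly the behaviour the one-page card should train.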
The Training Approach
Don't run a one-day AI training course and call it done. AI literacy develops through ongoing practice and support.
Week 1: Basic concepts. What AI can and can't do. Data guidelines. First prompting exercises.
Weeks 2-4: Applied practice. Teams use AI tools on real work tasks with support available. Regular check-ins to share learnings and address questions.
Ongoing: Champions programme. Regular skill-sharing sessions. Updated guidelines as tools and capabilities evolve.
The goal isn't AI expertise. It's AI fluency - comfortable enough to use the tools effectively, critical enough to use them responsibly.
[Figure: AI Skills Priority by Role Impact. Source: RIVER Group, Enterprise AI Skills Assessment, 2023]

