
AI and the Skills Gap No One Talks About

The real AI skills gap isn't technical. It's the ability to identify AI opportunities, evaluate outputs, and manage AI-augmented workflows. The 'AI-adjacent' skills crisis.
5 November 2024 · 9 min read
Tim Hatherley-Greene
Chief Operating Officer
Every enterprise leader knows about the AI talent shortage. Data scientists are expensive. ML engineers are scarce. But the skills gap that's actually stalling enterprise AI isn't technical at all. It's the gap in what we call "AI-adjacent" capabilities: the ability to identify where AI adds value, evaluate whether it's working, and redesign workflows around it.

What You Need to Know

  • The technical talent gap is well-documented. The AI-adjacent gap is larger, less visible, and more damaging. You can hire or outsource technical capability. You can't outsource the ability to identify the right problems, evaluate AI outputs in your domain, and manage the human side of AI adoption.
  • Organisations with strong AI-adjacent skills deploy AI 3× faster than those investing only in technical capability.
  • The three AI-adjacent skill categories are: opportunity identification, output evaluation, and workflow redesign. Most training programmes address none of them.
  • This gap exists at every level: executives who can't evaluate AI investments, managers who can't identify AI use cases in their teams, and frontline staff who can't work effectively alongside AI.
82% of enterprise AI training budgets are spent on technical skills: data science, ML engineering, prompt engineering.
Source: Deloitte, State of AI in the Enterprise, 6th Edition, 2024
14% of enterprises have formal training for AI output evaluation or workflow redesign.
Source: McKinsey & Company, The State of AI in 2024
Chart: Enterprise AI Training: Where Budget Goes vs Where the Gap Is (Source: Deloitte, 2024; McKinsey, 2024)

The Three AI-Adjacent Skill Gaps

1. Opportunity Identification

The most valuable AI skill in any organisation is the ability to look at a business process and recognise where AI could add value, and where it can't.
This isn't a technical skill. It requires deep understanding of the business process, knowledge of what AI can realistically do, and the judgement to distinguish high-value from low-value use cases.
What this looks like in practice: A claims manager who notices that 40% of their team's time goes to manually extracting data from supporting documents, and recognises that document extraction is a well-solved AI problem with clear ROI. That insight is worth more than any data science hire, because it points the technical team at the right problem.
Why it's rare: Most people in the organisation either understand the business deeply (but don't know AI's capabilities) or understand AI (but don't know the business). The intersection is where value lives, and almost nobody is being trained for it.

2. Output Evaluation

AI generates outputs. Someone needs to evaluate whether those outputs are good. Not in a technical sense (model accuracy, precision, recall) but in a domain sense. Is this claims assessment reasonable? Does this contract summary capture the material risks? Is this customer communication appropriate in tone and content?
We keep asking 'is the AI accurate?' when the real question is 'does anyone in the room know enough to tell?' The most dangerous AI deployment is one where nobody can evaluate whether the output is right.
Tim Hatherley-Greene
Chief Operating Officer
What this looks like in practice: A senior underwriter reviewing AI-generated risk assessments, not to catch every error, but to calibrate the system. Understanding where the AI is reliably excellent, where it's occasionally wrong, and where it consistently misses nuance. This calibration expertise is domain knowledge applied to AI, not AI knowledge applied to the domain.
Why it's rare: Output evaluation requires both domain expertise and an understanding of AI's failure modes. Most domain experts haven't been taught how AI fails. They either trust it completely or reject it entirely. Neither is useful.

3. Workflow Redesign

The biggest missed opportunity in enterprise AI isn't the technology. It's the failure to redesign work around AI rather than just bolting AI onto existing processes.
When you automate 60% of a claims processor's manual work, you haven't just freed up time. You've fundamentally changed the role. The claims processor is now a claims reviewer, auditor, and exception handler. The workflow needs to change. The metrics need to change. The training needs to change.
What this looks like in practice: A team lead who can take an AI capability and redesign their team's daily workflow around it. New triage processes, new escalation paths, new quality assurance steps, new KPIs that reflect AI-augmented work rather than manual work.
Why it's rare: Workflow redesign has historically been a specialist consulting skill. Now every manager with AI-augmented teams needs it. The management development pipeline hasn't caught up.

Why Training Programmes Miss This

Most enterprise AI training falls into two categories:
Technical upskilling. Data science bootcamps, ML engineering courses, prompt engineering workshops. These build important capability, but they address only the smaller part of the skills gap.
Awareness programmes. "What is AI?" presentations, executive briefings, innovation demos. These build enthusiasm but not capability. Knowing that AI exists is not the same as knowing where it adds value in your specific context.
What's missing is the middle: practical, domain-specific training that teaches people to identify AI opportunities in their work, evaluate AI outputs in their domain, and redesign workflows to capture AI value.
3× faster AI deployment in organisations with formal AI-adjacent skills programmes.
Source: MIT Sloan Management Review, The AI-Powered Organisation, Summer 2024

What to Do About It

For Executives

Stop thinking of AI skills as a technical investment. Your technical team (internal or external) will handle the models, the data pipelines, the infrastructure. Your gap is the hundreds of people across the organisation who need to work with AI effectively.
Allocate at least 30% of your AI training budget to AI-adjacent skills. Prioritise managers and team leads. They're the ones who will identify use cases and redesign workflows.

For Managers

Start mapping your team's work into three categories:
  1. Tasks AI can do well: repetitive, pattern-based, high-volume
  2. Tasks AI can assist with: complex judgement supported by AI analysis
  3. Tasks that require human capability: relationship, creativity, novel problem-solving
This mapping exercise is itself an AI-adjacent skill. It's also the foundation for every AI use case in your area.

For Individuals

The most career-valuable AI skill isn't prompt engineering. It's the ability to bridge AI capability and domain expertise. If you understand your business deeply and can articulate where AI adds value, you are dramatically more valuable than someone who can write better prompts.
Learn what AI can and can't do. Not at the technical level, but at the practical level. What kinds of documents can it process reliably? What kinds of analysis does it handle well? Where does it fail? This pattern recognition, applied to your domain, is the skill that matters.

The Compounding Effect

Organisations that invest in AI-adjacent skills create a flywheel. More people can identify opportunities, so more valuable use cases surface. Better output evaluation means AI systems improve faster. Better workflow redesign means higher adoption and more value captured.
The compound advantage isn't just about technology foundations. It's about human foundations. The organisations building both are the ones that will define the next decade.
The skills gap nobody talks about is the one that determines whether your AI investment delivers value. Technical capability is necessary. AI-adjacent capability is what makes it sufficient.