
The Enterprise Training Gap in AI

Enterprise AI training is failing because it's designed for the technology, not the learner. Here's what learning science says about building capability that sticks.
10 August 2024·6 min read
Dr Josiah Koh, Education & AI Innovation
Isaac Rolfe, Managing Director
Enterprise AI training has a problem. Thousands of organisations are running AI workshops, webinars, and boot camps. Billions are being spent. And the gap between "employees trained in AI" and "employees using AI in their work" is widening, not closing. The training industry is booming. The capability gap persists.

What You Need to Know

  • The enterprise AI training gap isn't about training quality. It's about training design
  • Most AI training teaches concepts (what AI is) when it should teach application (how to use AI for your specific job)
  • Learning science provides clear principles for capability building that most AI training programmes ignore
  • The fix: role-specific, workflow-embedded, practice-based training with follow-up support
$16B spent globally on AI training in 2024 (Source: Training Industry Inc., 2024)
14% of trained employees report using AI regularly 90 days after training (Source: LinkedIn Workplace Learning Report, 2024)

Why Current Training Fails

Concept-Heavy, Application-Light

The typical enterprise AI training agenda: "What is AI? Types of AI. How LLMs work. The AI landscape. Ethics considerations. Demo. Q&A." This is interesting. It's not useful. The claims processor who attends this workshop leaves with a better understanding of what AI is and no idea how to use it in their Tuesday morning workflow.
"AI in education isn't about replacing teachers. It's about designing better learning. The same applies to enterprise AI training. It's not about explaining AI. It's about designing experiences where people build capability through practice on real tasks."
Dr Josiah Koh, Education & AI Innovation

Generic, Not Role-Specific

A workshop designed for "all employees" is designed for nobody. The finance team needs AI for forecasting and analysis. The HR team needs AI for screening and communication. The operations team needs AI for scheduling and reporting. Generic training that covers none of these specifically leaves all of them without actionable skills.

Event-Based, Not Process-Based

Training as a one-time event creates a spike of awareness that decays rapidly. Without follow-up practice, coaching, and support, the capability built in a workshop disappears within weeks.
Learning science is clear on this: capability builds through distributed practice over time, not through concentrated information delivery. A 4-hour workshop is the worst possible format for building lasting AI capability.

What Learning Science Says

Spaced Practice

Distribute learning over time rather than concentrating it. Four 1-hour sessions over four weeks produce more durable learning than one 4-hour session. Each session builds on the previous one, with practice in between.

Contextual Learning

Learning that happens in the context where it will be applied transfers more effectively than learning in a classroom. AI training should happen at the desk, with real data, on real tasks.

Active Practice

Passive information consumption (lectures, demos) produces knowledge. Active practice (doing tasks with AI, making mistakes, getting feedback) produces capability. The ratio should be at least 70% practice, 30% instruction.

Social Learning

People learn from peers more effectively than from instructors for applied skills. Champion networks, peer practice sessions, and collaborative experimentation produce more capability than formal training programmes.

Redesigning Enterprise AI Training

1. Start With the Workflow, Not the Technology

For each role or team, map the specific tasks where AI could help. Design the training around those tasks. The claims processor learns AI by using it to classify their actual claims. The HR coordinator learns by using it to screen their actual applications.

2. Distribute Over Time

Replace the half-day workshop with a 6-week programme:
  • Week 1: 1-hour introduction + first task practice
  • Week 2: Practice with support
  • Week 3: 1-hour session on advanced techniques + new task
  • Week 4: Practice with peer support
  • Week 5: 1-hour session on evaluation and quality
  • Week 6: Demonstration and sharing
Each week builds on the previous one. Practice happens between sessions with real work tasks.
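The weekly cadence above can be sketched as data. This is a purely illustrative snippet (the session titles paraphrase the outline; the start date and structure are invented, not from the article) showing how a programme coordinator might generate spaced session dates a week apart:

```python
from datetime import date, timedelta

# Hypothetical encoding of the six-week programme.
# (week number, focus, whether it includes a facilitated 1-hour session)
PROGRAMME = [
    (1, "Introduction + first task practice", True),
    (2, "Practice with support", False),
    (3, "Advanced techniques + new task", True),
    (4, "Practice with peer support", False),
    (5, "Evaluation and quality", True),
    (6, "Demonstration and sharing", True),
]

def schedule(start: date):
    """Yield (date, focus, has_session) tuples one week apart,
    so practice is spaced over time rather than concentrated."""
    for week, focus, has_session in PROGRAMME:
        yield start + timedelta(weeks=week - 1), focus, has_session

for when, focus, has_session in schedule(date(2025, 3, 3)):
    kind = "session " if has_session else "practice"
    print(f"{when}  {kind}  {focus}")
```

The point of encoding the schedule rather than booking a single workshop is that each calendar entry becomes a checkpoint for the practice that should have happened in between.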

3. Embed Support

Provide accessible support throughout the learning period: a champion in each team, a quick-response help channel, and regular check-ins. The support addresses problems in real time, when the learner is motivated to solve them.

4. Measure Behaviour, Not Satisfaction

Track whether people are using AI in their work 30, 60, and 90 days after training. If they're not, investigate why and adapt. Satisfaction surveys tell you whether people enjoyed the experience. Behaviour data tells you whether the training worked.
"We don't measure success by how many people attended the training. We measure it by how many people are still using AI three months later. That's the number that matters."
Isaac Rolfe, Managing Director
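The 30/60/90-day check can be a simple query over a usage log. The sketch below is a minimal illustration with entirely invented names and dates (nothing here comes from the article): given a training date and a log of AI interactions, it computes the share of trainees active in the 30 days ending at each checkpoint.

```python
from datetime import date, timedelta

# Hypothetical usage log: one row per AI interaction per employee.
usage_log = [
    {"employee": "a.lee", "used_on": date(2024, 9, 10)},
    {"employee": "a.lee", "used_on": date(2024, 11, 2)},
    {"employee": "b.rahim", "used_on": date(2024, 9, 20)},
    {"employee": "c.okafor", "used_on": date(2024, 8, 25)},
]
trainees = {"a.lee", "b.rahim", "c.okafor", "d.smith"}
training_date = date(2024, 8, 15)

def adoption_rate(check_day: int, window: int = 30) -> float:
    """Share of trainees with at least one AI use in the `window`
    days ending `check_day` days after training."""
    end = training_date + timedelta(days=check_day)
    start = end - timedelta(days=window)
    active = {
        row["employee"]
        for row in usage_log
        if start < row["used_on"] <= end
    }
    return len(active & trainees) / len(trainees)

for day in (30, 60, 90):
    print(f"Day {day}: {adoption_rate(day):.0%} of trainees active")
```

A falling curve across the three checkpoints is the signal to investigate and adapt; a satisfaction survey would never surface it.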

The enterprise AI training gap won't close with better workshops. It'll close with fundamentally different training design: role-specific, workflow-embedded, distributed over time, practice-based, and measured on behaviour change. The learning science is clear. The investment is modest. The gap is entirely solvable.