
Building Internal AI Champions

Enterprise AI adoption doesn't spread through training programmes. It spreads through people: the internal champions who make AI real for their teams.
28 June 2023
Tim Hatherley-Greene
Chief Operating Officer
I've been studying how enterprise technology spreads inside organisations for twenty years. Email. Intranets. Cloud. Mobile. Collaboration tools. The pattern is remarkably consistent: adoption doesn't flow from training programmes or executive mandates. It flows from individuals. Specific people within teams who try the new thing, see value, and pull their colleagues along. AI is no different. The organisations that will scale AI fastest are the ones building these internal champions deliberately.

What You Need to Know

  • AI adoption in enterprises spreads peer-to-peer, not top-down
  • Internal champions are more effective than training programmes because they provide ongoing, contextualised support
  • Champions don't need to be technical. They need to be curious, trusted, and willing to experiment
  • A structured champion programme accelerates adoption by 40-60% compared to training alone
  • Start with 3-5 champions across different business units, not a centralised AI team
  • 3.4x more likely to adopt a tool when a trusted colleague recommends it (Prosci ADKAR Model, 2022)
  • 40-60% faster adoption with champion programmes versus training alone (Forrester, 2023)

Why Training Alone Doesn't Work

Let me describe the standard enterprise AI training programme. A vendor or internal team runs a half-day workshop. "Here's what AI can do. Here's how ChatGPT works. Here are some prompts to try." Attendees leave with a certificate and a PDF of slides. A week later, they're back to their normal routines, and the AI tools sit unused.
The problem isn't the training content. It's the context gap. Generic AI training doesn't connect to specific job functions, workflows, or pain points. The claims processor who attended the workshop can see that AI is impressive. They can't see how it fits into their Tuesday morning workflow.
Champions bridge this gap. They're embedded in their teams. They know the workflows. They can translate generic AI capability into specific, "here, let me show you how this helps with that report you hate doing" demonstrations that stick.

What Makes a Good Champion

Not What You'd Expect

The instinct is to pick the most technical person in each department. Don't. Technical skill is useful but not essential for a champion role. What matters more:
Curiosity. They try new tools without being asked. They're the person who figured out the new expense system before anyone else.
Trust. Their colleagues listen to them. Not because of their title but because of their track record and relationships.
Pragmatism. They care about whether something works, not whether it's technically elegant. They'll find the 80% solution and run with it.
Communication. They can explain things simply to people who aren't technical. They don't use jargon. They show rather than tell.
Willingness to fail publicly. Champions need to experiment in front of their teams, including when the experiments don't work. This normalises learning and reduces the fear of trying.

Where to Find Them

Look for the people who are already doing champion-like behaviour for other tools. Who was the first person in accounting to use the new reporting tool? Who in HR figured out how to automate the leave approval workflow? Who in operations built a spreadsheet that everyone depends on?
These people are natural adopters. They enjoy solving problems with tools. AI is just the next tool.

Building the Programme

Phase 1: Select and Equip (Weeks 1-3)

Identify 3-5 champions across different business units. Not more. Start small, learn, then scale.
Give them:
  • Access to AI tools relevant to their work (not just ChatGPT; specific enterprise tools if available)
  • A 2-hour briefing on AI capabilities and limitations (not a full training programme)
  • Protected time: 2-4 hours per week to experiment and support their teams
  • A direct line to the AI programme team for questions and escalation
  • Permission to fail. Explicitly. In writing if necessary.

Phase 2: Experiment and Document (Weeks 3-8)

Each champion identifies 2-3 specific tasks in their team's workflow where AI could help. Not transformative use cases. Small, practical ones. "Can AI help me draft the weekly status report?" "Can it summarise these meeting notes?" "Can it classify these incoming requests?"
They experiment. They document what works and what doesn't. They share findings with their teams informally.
The best champion wins aren't impressive. They're mundane. "I used AI to draft 30 emails in the time it used to take me to write 5." That's the kind of result that makes a sceptical colleague think: I want that.

Phase 3: Spread and Support (Weeks 8-16)

Champions start showing their colleagues. Not in formal training sessions. In casual, hands-on demonstrations. "Hey, let me show you what I've been doing with this." Side-by-side, at the desk, using real data from real workflows.
This is where peer-to-peer adoption happens. The colleague sees someone they trust, doing work they recognise, getting results they want. The barrier to trying drops from "I need to learn AI" to "show me how you did that."

Phase 4: Formalise and Scale (Month 4+)

By now you have evidence. Which use cases delivered value? Which teams adopted fastest? What resistance patterns emerged? Use this evidence to expand the programme: more champions, more teams, documented playbooks for the use cases that worked.
The champions who performed well become the nucleus of a broader AI capability function. Not a centralised team, but a distributed network of practitioners who keep AI adoption moving forward.

Common Mistakes

Picking champions by seniority. The department head might be enthusiastic about AI, but they're not going to sit with a team member and troubleshoot a prompt. Pick practitioners, not managers.
Overloading champions. If a champion's regular workload doesn't change, they won't have time to experiment and support their teams. Protected time isn't optional. Two hours a week minimum.
Expecting champions to be experts. They don't need to understand how language models work. They need to understand their team's workflows and be willing to experiment with AI tools. Expert support comes from the central programme team.
Measuring the wrong things. Don't measure "number of AI workshops delivered" or "percentage of staff trained." Measure: tasks where AI is being used regularly, time saved on specific workflows, and the number of people who've tried AI tools independently (not in a training session).
Treating it as a one-off programme. Champions need ongoing support, new tools and capabilities to explore, and regular connection with each other. A quarterly champions meetup where they share wins and challenges keeps the momentum going.

The Multiplier Effect

A single champion in a 20-person team can shift adoption from "nobody uses AI" to "half the team uses AI for something" within three months. Five champions across the organisation create a network effect: they share techniques, solve problems collectively, and create a critical mass of adoption that makes AI feel normal rather than novel.
This is how every successful technology adoption has worked. Not through mandates. Through people who care enough to help their colleagues see the value.

If you're planning enterprise AI adoption, start here. Don't start with a platform evaluation or a vendor shortlist. Start by identifying five people in your organisation who are curious, trusted, and practical. Give them tools, time, and permission. They'll do more for your AI adoption in three months than a year of top-down strategy.