
AI Adoption Is a People Problem

78% of CHROs say roles need to change for AI. 63% of organisations cite human factors as their primary AI challenge. The technology works. The adoption doesn't.
2 April 2026·9 min read
Tim Hatherley-Greene
Chief Operating Officer
AI demos are magic. AI in production is a people problem. The technology works brilliantly in controlled environments, but the moment you hand it to a team of 50 people with existing workflows, habits, and concerns, adoption stalls. The gap between "this is amazing" and "I use this every day" is almost never technical.

What You Need to Know

  • The primary barrier to enterprise AI adoption is human factors, not technology. 63% of organisations report people challenges as their top obstacle.
  • 78% of CHROs agree that workflows and roles need to fundamentally change for AI to deliver value, but most organisations haven't started that work.
  • AI skills have a half-life of 3-4 months. Training once and walking away guarantees failure.
  • Support functions like Legal, HR, and Risk are the most frequent blockers to AI rollout, not because they're wrong, but because they're excluded from the conversation too late.
  • The first 30 days of any AI rollout determine whether it becomes a daily tool or expensive shelfware.
78%
of CHROs agree workflows and roles need to change for AI
Source: Gartner, CHRO Change Management Trends, March 2026
63%
of organisations cite human factors as their primary AI challenge
Source: HBR, Where Senior Leaders Are Struggling with AI Adoption, February 2026
38%
of knowledge workers now use gen AI daily, up from 11% in 2024
Source: McKinsey, The State of AI in 2025

The Demo-to-Daily Gap

Here's what I've seen play out dozens of times. You run a proof of concept. The team loves it. Leadership signs off. You deploy it to the wider organisation. And then... nothing much happens.
The demo worked because it was controlled. One use case, a curated dataset, an enthusiastic early adopter driving it. Daily use is different. Daily use means someone at 4pm on a Thursday, already behind on their workload, deciding whether to use the new AI tool or stick with the process they've used for years. That's not a technology decision. That's a human one.
The organisations that close this gap don't do it with better models or fancier interfaces. They do it by understanding what makes people change how they work, and then designing for that reality.

The Three Adoption Killers

1. The support function bottleneck

Legal wants to review the data implications. HR needs to understand the workforce impact. Risk wants to assess the exposure. These are all reasonable requests, and they'll add months to your timeline if you bring them in at the end.
Support functions are the most frequent blockers to AI adoption, not because they're obstructive, but because they're often the last to hear about it. By the time Legal sees the AI initiative, the team has already built expectations around a timeline that didn't account for compliance review.
The fix is simple: bring Legal, HR, and Risk in at the start. Not to approve, but to co-design. When they shape the guardrails early, they become enablers instead of blockers.

2. The training half-life

AI skills have a half-life of roughly 3-4 months. The model updates. The interface changes. New capabilities appear. The training you delivered in January is outdated by April. And yet, 38% of adoption challenges stem from insufficient training, which usually means training that happened once and was never refreshed.
Continuous learning isn't a nice-to-have. It's the difference between a tool people use and a tool people used to use. Build learning into the workflow itself: short, contextual, ongoing. Not a two-hour workshop that everyone forgets.
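The half-life claim can be made concrete with a simple decay model. This is an illustrative sketch, not a measured curve: it assumes skill relevance halves every 3-4 months and asks how much of January's training survives to April.

```python
# Illustrative half-life model of AI skill relevance. The 3.5-month
# half-life is an assumption taken from the article's 3-4 month range,
# not an empirically fitted parameter.
def skill_relevance(months_since_training: float,
                    half_life_months: float = 3.5) -> float:
    """Fraction of original training still relevant after a given time."""
    return 0.5 ** (months_since_training / half_life_months)

# Training delivered in January, checked in April (3 months later):
print(round(skill_relevance(3), 2))   # ~0.55 of original relevance
# By August (7 months), roughly a quarter remains:
print(round(skill_relevance(7), 2))
```

On this model, a one-off workshop is already half-obsolete by the next quarter, which is the arithmetic behind building learning into the workflow rather than scheduling it once.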

3. Change fatigue

Your AI rollout isn't happening in isolation. It's landing on top of the last three platform migrations, the new CRM, the restructure, and whatever's coming next quarter. People have a finite capacity for change, and AI often arrives when that capacity is already stretched.
This means you can't treat AI adoption as "one more thing." You need to be honest about the change load your team is carrying and sequence accordingly. Sometimes the smartest move is to wait two months so the rollout lands when people actually have the bandwidth to engage with it.

What Actually Works

Start with the process people hate most

Don't start with the most strategically important use case. Start with the one that makes people groan. The weekly report nobody enjoys writing. The data entry that eats three hours every Monday. The compliance checklist that's mind-numbing.
When AI removes a genuine pain point, adoption takes care of itself. People don't need to be convinced to use a tool that eliminates work they already resent. That early win builds confidence, and confidence is contagious. It pushes people up and over the adoption curve faster than any training programme.

Involve the team early

The teams that adopt AI fastest are the ones who helped choose it. Co-design beats top-down mandates every time. When people have a say in how AI fits into their workflow, they take ownership of making it work. When it's imposed on them, they find reasons it won't.
This doesn't mean design by committee. It means finding three or four people from the target team, giving them early access, and letting them shape how the tool integrates with their actual work. Their feedback will be better than any consultant's, and their advocacy will be more credible than any executive sponsor's.

Measure adoption, not just accuracy

Most AI programmes measure model accuracy, response quality, and processing speed. These matter, but they're not the metrics that predict success. What you really need to know is: are people using it? How often? For what? And when they stop, why?
Track daily active usage, not just logins. Track task completion through the AI workflow, not just queries submitted. And track the drop-off points, because that's where the friction lives. A 95% accurate model with 20% adoption delivers far less value than an 85% accurate model that 80% of the team uses every day.
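The accuracy-versus-adoption trade-off in the last sentence is worth working through. A minimal sketch, treating delivered value as the product of model accuracy and the share of the team using the tool daily (a deliberately crude proxy, assumed here for illustration):

```python
# Crude proxy, assumed for illustration: value scales with both model
# accuracy and the fraction of the team actually using the tool daily.
def effective_value(accuracy: float, daily_adoption: float) -> float:
    """Fraction of the team's addressable work actually improved."""
    return accuracy * daily_adoption

# The comparison from the text:
shelfware = effective_value(0.95, 0.20)      # 95% accurate, 20% adoption
daily_tool = effective_value(0.85, 0.80)     # 85% accurate, 80% adoption

print(round(shelfware, 2), round(daily_tool, 2))
```

Under this assumption the less accurate but widely used tool delivers roughly 3.5 times the value (0.68 versus 0.19), which is why adoption metrics, not just model metrics, belong on the dashboard.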

The First 30 Days Matter Most

The pattern is remarkably consistent. If people aren't using the AI tool regularly within the first 30 days, they probably never will. Old habits reassert themselves quickly, and the window for establishing new ones is narrow.
Week 1: Launch with a specific, pain-solving use case. Not a general capability, but a specific task the team will do this week.
Week 2: Gather feedback aggressively. What's working? What's frustrating? Fix the friction points immediately, even if they're small. Responsiveness in this window signals that leadership takes adoption seriously.
Week 3: Expand to the second use case, ideally one that the team suggested during the first two weeks. This builds momentum and reinforces that their input matters.
Week 4: Share early wins publicly. Concrete examples of time saved, quality improved, frustration eliminated. Stories from peers are worth more than metrics from dashboards.

What to Do

  1. Audit your change load before you launch. Map every active change initiative your team is managing. If they're already saturated, delay the AI rollout until you've created space. Launching into change fatigue is the fastest way to create shelfware.
  2. Bring support functions in on day one, not day ninety. Schedule a joint kickoff with Legal, HR, and Risk before you write a single line of requirements. Give them a seat at the design table and you'll move faster, not slower.
  3. Build a 30-day adoption plan with weekly feedback loops. Plan the first four weeks in detail: which use case, which team, what support, how you'll measure. Then commit to acting on feedback within 48 hours. The technology is the easy part. Getting 50 people to change how they work? That's the real challenge, and it deserves a real plan.