Why Executives Resist AI (And What to Do About It)

It's not just frontline teams that resist AI. Executive resistance is quieter, harder to detect, and more damaging to adoption.
12 June 2024 · 7 min read
Tim Hatherley-Greene
Chief Operating Officer
We talk a lot about frontline resistance to AI. The claims processor who won't use the new system. The analyst who quietly reverts to spreadsheets. But the resistance that damages AI adoption most is happening two levels up, in the executive suite, where it looks like caution and sounds like questions.

What You Need to Know

  • Executive resistance to AI is common but rarely acknowledged because it presents as "reasonable caution" rather than overt opposition
  • The primary drivers are loss of decisional authority (and the accountability anxiety that comes with it), competence gaps around AI, and reputational risk
  • Executive resistance creates a permission vacuum: teams won't adopt AI if leadership doesn't visibly champion it
  • Addressing executive resistance requires different tactics than frontline resistance: peer examples, controlled exposure, and reframing AI as a leadership tool

The Quiet Resistance

Executives don't say "I'm against AI." They say things like:
  • "We need to be careful about this."
  • "Let's see more data before we commit."
  • "I want to make sure we're not moving too fast."
  • "What are the risks we haven't considered?"
These are legitimate questions. But when they keep coming, month after month, without any corresponding decision to move forward, they're functioning as resistance. The team hears the questions and interprets them correctly: leadership isn't ready. And if leadership isn't ready, the team won't move either.
43% of C-suite leaders express concern about AI's impact on their decision-making authority (Harvard Business Review Analytic Services, 2023).

What's Actually Driving It

Loss of Decisional Authority

Executives are decision-makers. That's their organisational identity. AI systems that recommend decisions, flag exceptions, or prioritise actions shift some of that authority from human judgement to algorithmic output.
An executive who's built their career on reading situations, weighing options, and making calls now has a system that does some of that work. The rational response is to question the system's reliability. The emotional response is to feel diminished by it.
This isn't ego. It's a reasonable concern about accountability. If the AI recommends a course of action and it goes wrong, who's responsible? The executive who followed the recommendation? The system that generated it? This accountability ambiguity creates hesitation.

Competence Gaps

Most executives don't understand how AI works. Not the technical detail; that's not their job. But many lack a working mental model for what AI can and can't do, how much confidence to place in its outputs, or how to evaluate its recommendations.
This creates a vulnerability they're not used to. In their domain, they can evaluate proposals, assess risks, and challenge assumptions because they have deep expertise. With AI, they're dependent on technical teams to explain what the system is doing and why. That dependence is uncomfortable for people who are used to being the most knowledgeable person in the room.

Reputational Risk

Executives are personally associated with the decisions they make. An AI initiative that fails publicly, produces biased outputs, or makes the news for the wrong reasons reflects on them. The downside risk feels asymmetric: a successful AI deployment is a team achievement, but a failed one is a leadership failure.
Executives aren't afraid of AI. They're afraid of being responsible for something they don't fully understand. And that's actually a reasonable fear.
Tim Hatherley-Greene, Chief Operating Officer

What to Do About It

Peer Examples Over Internal Pitches

The most effective way to shift executive attitudes isn't more presentations from the AI team. It's hearing from peers at other organisations who've done it successfully.
When a fellow CEO describes their experience with AI, including the mistakes and the course corrections, it normalises the journey. "If they can do it, we can do it." This peer influence is significantly more powerful than any internal business case.
Facilitate these conversations. Industry events, advisory boards, organised site visits, or even a 30-minute video call with a counterpart at a non-competing organisation. Make it easy for your executives to hear from people they relate to.

Controlled Exposure

Don't ask executives to champion something they haven't experienced. Create safe, low-stakes opportunities for them to interact with AI directly.
Not a demo where a technical person shows the system. A hands-on session where the executive uses it for something relevant to their own work. "Here's how AI can summarise the board papers you read every month. Try it."
The goal isn't to make them AI experts. It's to replace abstract anxiety with concrete experience. Once they've used AI for something useful, the conversation shifts from "is this safe?" to "what else can it do?"

Reframe AI as a Leadership Amplifier

Executives respond to framing that positions AI as augmenting their capability, not replacing their judgement.
"AI processes the data and surfaces the patterns. You make the decision." This framing preserves the executive's role as decision-maker while positioning AI as a tool that makes their decisions better informed.
Concretely: show them how AI can help with tasks they actually do. Board report preparation. Competitive intelligence synthesis. Risk pattern identification. Customer sentiment analysis. When AI helps them do their job better, resistance dissolves because the threat has been reframed as a tool.

Address Accountability Directly

Don't let accountability ambiguity linger. Establish clear principles early:
  • AI recommends. Humans decide. The decision-maker remains accountable.
  • AI outputs are inputs to judgement, not substitutes for it.
  • Governance frameworks define when AI output needs human review and when it can be trusted.
These principles should be documented and approved by the executive team. The act of defining accountability reduces the anxiety about it.

Build Confidence Gradually

Don't ask executives to bet the organisation on AI. Ask them to approve a single, bounded experiment. When it works, approve the next one. Confidence builds through evidence, not arguments.
The timeline is slower than the AI team would like. That's okay. Executive confidence that's built gradually on evidence is more durable than confidence that's built on enthusiasm and breaks at the first problem.

Executive resistance is the hidden bottleneck in enterprise AI adoption. It's not malicious. It's not irrational. It's a natural response from people who are being asked to champion something they don't fully understand, with their reputation on the line. Address it with empathy, evidence, and peer influence, and you'll unlock the permission your teams need to move forward.