Psychological Safety in AI Adoption

Teams that feel safe to experiment with AI adopt it faster. Teams that don't, resist. Here's what psychological safety looks like in the context of enterprise AI.
15 May 2024 · 6 min read
Dr Tania Wolfgramm, Chief Research Officer
Dr Gerson Tuazon, AI Strategy & Health Innovation
The most cited reason for slow AI adoption in enterprises is "resistance to change." But resistance isn't a root cause. It's a symptom. The root cause, in most organisations we've studied, is a lack of psychological safety: people don't feel safe to experiment, make mistakes, or admit they're struggling with something new.

What You Need to Know

  • Psychological safety (Edmondson, 1999) is the belief that one won't be punished for mistakes, questions, or admitting uncertainty
  • It's the strongest predictor of team learning behaviour, which makes it the strongest predictor of AI adoption speed
  • Organisations can measure and improve psychological safety with targeted interventions
  • The research shows: teams with high psychological safety adopt new technologies 2-3x faster
2.5x: faster technology adoption in teams with high psychological safety (Source: Edmondson, A., The Fearless Organization, 2018)
76%: of employees say fear of looking incompetent prevents them from trying new tools (Source: Gallup Workplace Survey, 2023)

Why Psychological Safety Matters for AI

AI adoption requires people to be beginners again. Experienced professionals need to learn new tools, new workflows, and new ways of thinking about their work. This learning process involves mistakes, confusion, and temporary incompetence.
In a psychologically safe environment, these are normal parts of learning. In an unsafe environment, they're career risks. The experienced claims processor who's been the team expert for fifteen years won't experiment with AI if trying it means looking incompetent in front of their manager and colleagues.
My research on perspective-taking in leadership shows that the capacity to adopt another's viewpoint is directly linked to creating safety. When leaders can genuinely see the AI transition from their team's perspective, they design environments where experimentation is encouraged. When they can't, they inadvertently create environments where compliance replaces curiosity.
Dr Gerson Tuazon, AI Strategy & Health Innovation

The Safety Signals

Psychological safety isn't a policy. It's an atmosphere created by specific leadership behaviours:

Signals That Build Safety

  • Leaders use AI visibly and discuss their own learning curve. "I tried using AI for the board report this week. Here's what worked and what didn't."
  • Mistakes are discussed as learning, not failures. "What did we learn from that?" not "Who was responsible?"
  • Questions about AI are welcomed. "That's a good question" is a safety signal. "You should know this by now" is a threat signal.
  • Experimentation time is protected. If the only time people can try AI is outside their normal workload, the signal is "this isn't important enough for work time."
  • Scepticism is legitimate. Teams where everyone must be enthusiastic about AI aren't psychologically safe. They're performatively aligned.

Signals That Destroy Safety

  • Public comparison of adoption rates. League tables of who's using AI most create competition, not safety.
  • Mandates without support. "Everyone must use AI by Q3" without corresponding time, training, and permission to struggle.
  • Punishment for low usage. Including subtle punishment like exclusion from opportunities, reduced autonomy, or pointed questions in reviews.
  • Dismissing concerns. "AI is the future, get on board" dismisses legitimate anxiety and signals that this organisation values compliance over honesty.

The Research Framework

Evaluation of AI adoption must include the human experience, not just the technical metrics. If we're evaluating whether AI is "working," we need to evaluate whether the conditions exist for people to use it authentically, not just whether the system is deployed. The Pou Marama framework asks us to evaluate through values, including the value of human dignity in the face of technological change.
Dr Tania Wolfgramm, Chief Research Officer
Edmondson's research identifies four components of psychological safety:
  1. Interpersonal risk tolerance: Can I ask a question without being seen as ignorant?
  2. Openness to vulnerability: Can I admit I don't understand something?
  3. Inclusiveness: Are all perspectives valued, or only the technical ones?
  4. Learning orientation: Does this team treat challenges as learning opportunities?
Each component maps directly to AI adoption behaviours:
Safety Component | AI Adoption Behaviour
Risk tolerance | Willingness to try AI on a real task
Openness to vulnerability | Asking for help when the AI output doesn't make sense
Inclusiveness | Non-technical team members contributing to AI design
Learning orientation | Sharing failed experiments as useful information

Building Safety for AI Adoption

For Team Leaders

  1. Model the learning curve. Use AI yourself. Share what you're learning, including what's hard.
  2. Normalise struggle. "AI is new for everyone. I expect mistakes. I expect questions. Both are welcome."
  3. Create safe practice spaces. Dedicated time where the team can experiment with AI on non-critical tasks, without productivity pressure.
  4. Ask before telling. "What's been your experience with the AI tool?" before "Here's what you should be doing with AI."
  5. Respond to scepticism with curiosity. "What concerns you about it?" not "Trust the system."

For Organisations

  1. Measure psychological safety. Use validated instruments (Edmondson's 7-item scale) as part of AI readiness assessments.
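Survey scoring for instruments like this is straightforward to automate. The sketch below aggregates a team's responses to a 7-item survey on a 1-7 Likert scale; which items are reverse-scored (negatively worded) is an assumption here for illustration, so match it to the actual instrument you administer:

```python
# Minimal sketch: aggregate responses to a 7-item psychological safety
# survey (1-7 Likert). The reverse-scored item positions below are an
# assumption for illustration -- verify against the instrument you use.
REVERSE_SCORED = {0, 2, 4}  # hypothetical indices of negatively worded items
SCALE_MAX = 7

def person_score(responses):
    """Mean item score for one respondent, reversing negative items."""
    if len(responses) != 7:
        raise ValueError("expected 7 item responses")
    adjusted = [
        (SCALE_MAX + 1 - r) if i in REVERSE_SCORED else r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

def team_score(all_responses):
    """Team-level safety: mean of individual mean scores."""
    return sum(person_score(r) for r in all_responses) / len(all_responses)

# Example: two anonymised respondents; higher adjusted score = safer
team = [
    [2, 6, 3, 5, 2, 6, 5],  # low answers on negative items reverse to high
    [1, 7, 2, 6, 1, 7, 6],
]
print(round(team_score(team), 2))  # -> 6.07
```

Responses should be collected anonymously and reported only at team level; tracking to individuals would itself undermine the safety being measured.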
  2. Include safety in change management. Safety-building activities alongside training, not instead of it.
  3. Train managers in safety behaviours. Most managers don't know what psychological safety is, let alone how to create it. Brief, practical training makes a measurable difference.
  4. Protect experimentation from performance metrics. During the transition period, don't penalise reduced productivity. It's the cost of learning.

Psychological safety isn't soft. It's measurable, buildable, and directly predictive of AI adoption speed. The organisations that invest in creating safe learning environments will adopt AI faster and more sustainably than those that rely on mandates, training programmes, and executive pressure.