Is AI Making Your Team Anxious?

AI anxiety is now a recognised source of workplace stress, distinct from regular work pressure. Employees worry about job security, ethical use, and whether they can keep up. Most leadership teams are not addressing it.
18 November 2025·5 min read
Tim Hatherley-Greene
Chief Operating Officer
Dr Tania Wolfgramm
Chief Research Officer
Your team is worried about AI. Not in the way they're worried about a deadline or a difficult client. A different kind of worry. The kind that sits underneath everything else and doesn't go away when the project ships. Researchers now classify it as its own category: AI anxiety. And most leadership teams are doing nothing about it.

Three Distinct Fears

AI anxiety clusters around three concerns, each requiring a different response.
Job security. "Will AI replace me?" This is the loudest fear and the one leaders most often dismiss with vague reassurances. Employees hear "AI will augment, not replace" and translate it to "they're going to replace us and they haven't figured out the timeline yet." Platitudes make it worse.
Ethical complicity. "Am I helping build something harmful?" This one surprises leadership teams who assume everyone shares their enthusiasm. Some employees genuinely wrestle with whether the AI systems they're building or using are fair, biased, or damaging. Dismissing this as resistance misses the point entirely.
Capability anxiety. "Can I keep up?" The pace of change in AI tooling means that skills learned six months ago feel outdated. Employees who were confident in their expertise now wonder whether their knowledge is becoming irrelevant. This is particularly acute for mid-career professionals who built their identity around deep domain knowledge.
20%
higher productivity in organisations that integrate wellbeing into AI leadership strategy
Source: Deloitte Human Capital Trends, 2025

Quiet Burnout

The visible symptoms of AI anxiety are easy to spot: resistance to new tools, negative comments in meetings, disengagement from AI-related projects. The invisible symptoms are harder to see.
Quiet burnout shows up as presenteeism with declining quality. Employees appear engaged. They attend the meetings. They use the tools. But their energy is drained by a persistent low-level stress they don't feel safe naming. Output quality drops gradually, not dramatically enough to trigger a conversation.
Some organisations are starting to detect this through communication pattern analysis. Cisco and Adidas have deployed AI tools that gauge employee sentiment from email and messaging patterns, looking for shifts in tone, response times, and collaboration frequency. The ethics of using AI to measure the anxiety caused by AI is its own conversation.

The Analogue Counter-Trend

An unexpected response to AI anxiety is gaining traction: deliberate analogue activity. Employees are intentionally removing apps, picking up tactile hobbies, and treating non-digital activities as active meditation rather than leisure.
Woodworking. Ceramics. Gardening. Cooking from recipes in physical books.
This isn't Luddism. The same people using AI tools at work are choosing analogue experiences outside work as a way to restore cognitive balance. The appeal is direct sensory feedback, something you made with your hands that exists in physical space and didn't require a prompt.

What Leadership Can Actually Do

Name it. Acknowledging AI anxiety as a real and reasonable response removes the shame of feeling it. A team that can talk about uncertainty openly handles it better than one that pretends everything is fine.
Be specific about impact. "AI won't replace you" is meaningless. "We're using AI to automate invoice processing, which will free the finance team to focus on analysis and advisory work" is actionable. People can adapt to specific changes. They can't adapt to vague threats.
Invest in learning, not just tools. Rolling out AI tools without investing equivalent energy in training and support sends a clear message: the technology matters more than the people using it. Structured learning time, peer support, and permission to experiment without performance pressure all reduce capability anxiety.
Build governance visibly. Employees worried about ethical complicity need to see that their organisation takes AI governance seriously. An ethics framework that lives in a document nobody reads doesn't help. What does help is visible governance: regular reviews, clear escalation paths, and a genuine willingness to pause or stop if something isn't right.
The organisations getting AI adoption right are the ones that treat it as a people challenge first and a technology challenge second. If your team is anxious, no amount of tooling will deliver the outcomes you're expecting.
3
distinct types of AI anxiety identified by researchers: job security, ethical complicity, and capability
Source: APA Technology and Wellbeing Survey, 2025
The productivity data is clear: teams that feel supported through AI transitions outperform teams that don't. The question for leadership isn't whether to address AI anxiety. The question is whether you're already too late.