We build AI systems for enterprises. We have a financial interest in AI adoption. And yet the psychology of how people experience technological change tells us that neither dominant narrative holds. "AI will take your job" overstates the threat. "AI will not affect your job" understates the change. The honest answer sits in the space between, where people actually live and work and worry. Here is our actual position.
The Uncomfortable Middle
The two dominant narratives about AI and work are both wrong.
The alarmist narrative says AI will eliminate millions of jobs. Mass unemployment. Societal collapse. This sells headlines and conference tickets, but the evidence does not support it. Previous waves of automation (ATMs, self-checkout, factory robotics) displaced specific tasks within roles far more than they eliminated roles entirely.
The reassurance narrative says AI will only create jobs, make everyone more productive, and we will all be fine. This is what AI companies are incentivised to say, and it is also not quite true. Some roles will shrink. Some tasks will disappear. Some people will need to transition, and transitions are hard.
The honest answer lives in the middle, and nobody likes the middle because it does not fit a headline.
What We Actually See
Across our enterprise AI deployments, here is what we observe:
Tasks change more than roles do. An insurance claims assessor who spent 60% of their time on data entry and document review now spends 20% on those tasks. The role did not disappear. It changed. The assessor now spends more time on complex cases, client communication, and quality review of AI outputs.
Efficiency gains do not always mean headcount reduction. Several of our clients used AI-driven efficiency to handle growing workloads without hiring, rather than to reduce existing teams. The same number of people do more work, often more interesting work.
Some roles genuinely shrink. Data entry roles, basic document processing roles, and tier-one customer service roles are being reduced. Not eliminated, but reduced. Organisations that previously needed ten people for a task now need six. This is real, and pretending otherwise does not help.
New roles emerge, but not one-for-one. AI creates demand for AI engineers, evaluation specialists, and AI operations roles. But these roles require different skills from the roles that shrink. The person whose data entry role was reduced is not going to become an AI engineer without significant retraining.
14% of workers globally have experienced job displacement due to AI, while 40% report significant changes to their daily tasks. Source: OECD Employment Outlook, 2024.
The Transition Is the Problem
The issue is not the endpoint. In five to ten years, most people will work alongside AI tools and their roles will be different and, in many cases, better. The issue is the transition.
Retraining is not free. Learning to work effectively with AI takes time, support, and resources. Organisations that deploy AI without investing in retraining are making a bet that their people will figure it out on their own. Some will. Some will not.
The burden falls unevenly. AI disproportionately affects routine cognitive tasks. The roles most affected tend to be middle-skill, middle-income positions. Senior roles that involve judgement, relationships, and strategy are less affected. Entry-level roles that involve physical work are less affected. The middle gets squeezed.
Speed matters. Gradual change is manageable. Rapid change is not. If AI transforms a role over five years, people can adapt. If it transforms a role in twelve months, many cannot. The speed of AI capability improvement suggests the transformation may be faster than previous technology waves.
What Responsible Organisations Should Do
We advise our clients on this because we believe it matters, and because ignoring it creates real business risk. Disengaged, anxious employees do not adopt AI effectively.
Be honest about the impact. Do not promise that nobody's role will change. Do not promise that AI will only make things better. Be specific about which tasks will change, which roles will evolve, and what support you will provide during the transition.
Invest in retraining before deployment. Start capability building before you deploy AI, not after. Give people time to develop new skills while their existing skills are still relevant.
Create transition pathways. For roles that will shrink, what are the adjacent roles people can move into? What skills do they need? How will you support the transition? This needs to be specific and funded, not vague promises.
Measure the human impact. Track not just AI performance and cost savings, but employee engagement, stress levels, and skill development. If your AI deployment is technically successful but your team is miserable, you have not succeeded.
Slow down if necessary. If the organisation is not ready for the change, slow the deployment. A six-month delay that preserves team health is better than a rapid deployment that destroys trust.
Our Own Position
We are an AI company. We build AI systems. We benefit from AI adoption. And we believe that building AI responsibly includes being honest about its effects on work.
We will not sell AI to a client without discussing the workforce impact. We will not pretend that AI is purely additive. We will not ignore the transition costs.
This is not altruism. It is pragmatism. AI deployments that ignore the human impact fail. Not because the technology fails, but because the people resist, disengage, or leave. The organisations that manage the transition well get better AI outcomes than the ones that pretend there is no transition to manage.
The honest take is this: AI will change work significantly. Most people will be fine if the transition is managed well. Many will not be fine if the transition is ignored. The difference is a choice organisations make, not an inevitable outcome of the technology.
