We've sat in enough board conversations about AI investment to see the pattern. The stated reasons for investing (or not investing) in AI are always rational: ROI projections, competitive positioning, efficiency gains, risk mitigation. But the actual decision-making process is driven by forces that no business case captures. Gerson's behavioural science lens has helped us name what we've been observing for two years: enterprise AI investment is a psychological decision dressed in financial language.
What You Need to Know
- Enterprise AI investment decisions are influenced by at least four cognitive biases: anchoring to peer behaviour, loss aversion, status quo bias, and sunk cost commitment
- Organisations that invest in AI because competitors did (anchoring) often choose poorly because they're solving for anxiety, not for a specific business problem
- Loss aversion drives more AI investment than opportunity recognition. "We can't afford to fall behind" is a more powerful motivator than "here's what we could gain"
- Understanding these biases doesn't eliminate them, but it does improve the quality of investment decisions
The Four Biases
Anchoring to Peers
The most common trigger for enterprise AI investment is not an internal business case. It is hearing that a competitor or peer organisation has invested. This anchoring effect shapes not just the decision to invest but also the scale of investment. "They spent $2 million, so we should spend at least that" is anchoring, not strategy.
The problem is not that peer behaviour is irrelevant. It is that it bypasses the question that should come first: what specific problem are we solving, and is AI the right solution?
Leaders anchor to reference points, often competitor behaviour, and then construct rational justifications around the anchor. The business case follows the decision, not the other way around.
Dr Gerson Tuazon
AI Strategy & Health Innovation
Loss Aversion
Kahneman and Tversky's foundational insight applies directly to AI investment: losses loom larger than gains. "We'll fall behind competitors" is psychologically more powerful than "we'll gain competitive advantage." Both statements describe the same situation, but the loss frame drives faster, larger, and often less considered investment.
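The asymmetry can be made concrete with Tversky and Kahneman's (1992) prospect-theory value function, using their median parameter estimates (curvature α ≈ 0.88, loss-aversion coefficient λ ≈ 2.25). A minimal sketch; the dollar figure is illustrative, not from any particular business case:

```python
def subjective_value(x, alpha=0.88, lam=2.25):
    """Perceived value of a gain (x > 0) or loss (x < 0),
    per Tversky & Kahneman's cumulative prospect theory."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# The same $1M, framed as a gain versus a loss:
gain = subjective_value(1_000_000)    # "here's what we could gain"
loss = subjective_value(-1_000_000)   # "we can't afford to fall behind"
print(f"loss/gain intensity ratio: {abs(loss) / gain:.2f}")  # ratio is lambda, 2.25
```

With identical stakes, the loss frame carries roughly 2.25 times the psychological weight of the gain frame, which is why "falling behind" moves budgets faster than "getting ahead."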
Loss aversion also explains why underperforming AI initiatives get additional funding instead of being discontinued. The invested capital feels like a loss, and committing more resources feels like a path to recovering that loss. This is the sunk cost fallacy wearing a loss aversion disguise.
Status Quo Bias
For organisations that have not invested in AI, status quo bias provides comfortable inertia. "Our current processes work." "We'll wait until the technology matures." "We're not a technology company." These are rationalised preferences for the current state, not strategic assessments.
Status quo bias is strongest in organisations with recent successful technology investments. "We just finished our cloud migration, we don't need another disruption" treats AI as an interruption to stability rather than an evolution of capability.
Sunk Cost Commitment
Once invested, organisations exhibit commitment escalation. A pilot that hasn't delivered expected results gets a second phase because stopping would "waste" the first investment. This pattern, well-documented in Gerson's field, explains why many enterprises have multiple underperforming AI initiatives that nobody will discontinue.
Better Decision Architecture
Separate the Trigger From the Decision
If the trigger for AI investment was a competitor announcement, a conference presentation, or an executive's enthusiasm, name it. Then set it aside and ask: what problem would AI solve in our specific context? If you can't name a specific problem with measurable outcomes, you're investing in anxiety reduction, not capability building.
Frame Both Ways
Present every AI investment case in both gain and loss frames. "This investment could improve processing speed by 30%" and "Not making this investment means our processing speed stays where it is while competitors improve." Leaders who see both frames make more balanced decisions than those who see only one.
Pre-Commit to Evaluation Criteria
Before approving an AI investment, define the criteria for continuing, expanding, or discontinuing. "If the pilot doesn't demonstrate X by month six, we stop." Pre-commitment reduces sunk cost escalation because the discontinuation decision was made before the emotional attachment to the project formed.
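One way to make the pre-commitment binding is to write the rule down as literally as a few lines of code, agreed before the pilot starts. The metric name and thresholds below are hypothetical examples, not from the article; the point is that the stop/continue logic exists before any money is spent:

```python
from dataclasses import dataclass

@dataclass
class PilotCriteria:
    metric: str            # what the pilot must demonstrate
    target: float          # pre-committed threshold
    deadline_month: int    # evaluation point agreed up front

def decide(criteria: PilotCriteria, observed: float, month: int) -> str:
    """Apply the rule that was agreed before the pilot started."""
    if month < criteria.deadline_month:
        return "continue"  # too early to judge against the deadline
    return "expand" if observed >= criteria.target else "stop"

# Hypothetical rule: 30% of claims auto-processed by month six, or stop.
rule = PilotCriteria(metric="claims auto-processed (%)",
                     target=30.0, deadline_month=6)
print(decide(rule, observed=22.5, month=6))  # below target at deadline: "stop"
```

The value is not the code itself but the discipline it forces: "stop" is a pre-agreed output of a rule, not a fresh negotiation with a team that is now emotionally invested.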
Seek Disconfirming Evidence
Actively look for reasons the AI investment might not work. Not to kill the project, but to improve it. Organisations that only seek confirming evidence (vendor references, optimistic case studies, enthusiastic internal champions) make worse investment decisions than those that deliberately seek out failure cases.
None of this means AI investment is irrational. It means the decision-making process around AI investment is subject to the same cognitive biases that affect all human decisions. Naming the biases doesn't eliminate them, but it creates the conditions for more deliberate choices. And deliberate choices tend to produce better outcomes than reactive ones.