
The Three Literacies at Scale

Two years after introducing the three literacies model: what happened when we rolled it out across client teams, and what we learned about AI fluency.
5 August 2025·8 min read
Isaac Rolfe
Managing Director
In 2023, we introduced a model for AI literacy built on three levels: conceptual literacy (understanding what AI is and is not), practical literacy (using AI tools effectively), and strategic literacy (knowing how AI changes your business). Two years later, we have rolled this model out across dozens of teams. The model holds up. The assumptions behind it needed updating.

The Original Model

The three literacies framework was straightforward. Enterprise AI adoption fails when people cannot operate at the right literacy level for their role:
Conceptual literacy is knowing what AI can do, what it cannot do, and roughly how it works. Every person in an organisation needs this. Without it, you get either fear ("AI will take my job") or magical thinking ("AI will solve everything").
Practical literacy is the ability to use AI tools effectively. Prompt design, output evaluation, workflow integration. The people who will use AI daily need this.
Strategic literacy is understanding how AI changes business models, competitive dynamics, and organisational design. Leaders need this. Without it, AI investments are tactical and fragmented.
The model was simple. The reality of deploying it at scale was not.

What We Got Right

The Levels Are Real

The distinction between the three levels holds up across every organisation we have worked with. The failure modes map precisely to literacy gaps:
  • Teams with low conceptual literacy resist AI adoption or misuse it through misunderstanding
  • Teams with low practical literacy adopt AI enthusiastically but ineffectively
  • Organisations with low strategic literacy build AI capabilities that do not connect to business outcomes
We can now predict, with reasonable accuracy, which organisations will succeed with AI based on their literacy profile. The correlation is strong enough that we assess literacy as part of every discovery engagement.

Sequence Matters

Conceptual before practical. Practical before strategic. This sequence, which we recommended from the start, has been validated repeatedly. Teams that jump straight to practical literacy without conceptual foundations hit problems faster: they over-trust AI outputs, misunderstand limitations, and generate more cleanup work than value.
Organisations that jump straight to strategic literacy without practical foundations make poor investment decisions: they fund AI initiatives based on what AI could theoretically do, not what AI can actually do in their specific context.
3.5x higher AI adoption rate in teams that followed the conceptual-practical-strategic sequence vs teams that started with practical training. Source: RIVER, literacy programme outcome data, 2023-2025.

What We Got Wrong

Practical Literacy Is Harder Than We Thought

We underestimated how difficult practical AI literacy is to develop. Using AI tools effectively is not like learning a new software application. It is closer to learning a new mode of thinking.
The skills involved (framing ambiguous problems as clear prompts, evaluating probabilistic outputs, knowing when to trust and when to verify) are genuinely difficult cognitive skills. They take weeks to develop, not hours. And they require practice with real work, not exercises.
Our initial training programmes allocated too little time to practical literacy. We now allocate 3-4x more, and the results are proportionally better.

Strategic Literacy Has a Prerequisite

We assumed that leaders could develop strategic AI literacy through briefings, case studies, and executive education. That works for surface understanding. Deep strategic literacy, the kind that produces good AI investment decisions, requires practical experience.
Leaders who have personally used AI tools, even for simple tasks, make dramatically better strategic decisions about AI investments than leaders who have only read about it or seen demos. The embodied understanding of what AI feels like to use, where it surprises you and where it disappoints, provides intuition that no briefing can replicate.
We now require executives in our strategic literacy programmes to complete a condensed practical literacy module first. The pushback is predictable. The results are not debatable.

Literacy Decays Without Practice

This was the biggest miss. Literacy is not a permanent state. It is a skill that atrophies without use. Teams that completed our literacy programme and then did not use AI regularly for 2-3 months needed significant refresher training.
This has implications for how organisations think about AI training. It is not a one-time investment. It is an ongoing capability that needs maintenance, similar to how professional certifications require continuing education.

The Updated Model

Based on two years of data, here is how the model has evolved:

Conceptual Literacy (Foundation)

Unchanged in structure, updated in content. The specific misconceptions we address have shifted. In 2023, the primary misconception was "AI will take my job." In 2025, it is "AI can do anything if you prompt it right." The over-trust problem has replaced the fear problem in most organisations.

Practical Literacy (Fluency)

Expanded significantly. We now treat practical literacy as a fluency spectrum, not a binary. Entry-level fluency (can use AI for simple generation tasks) is achievable in 1-2 weeks. Working fluency (can use AI effectively for most daily tasks) takes 4-6 weeks. Advanced fluency (can design AI-assisted workflows and evaluate AI for novel use cases) takes 3-6 months.

Strategic Literacy (Leadership)

Now has a practical prerequisite and a continuous learning component. Strategic literacy is not a workshop. It is an ongoing practice of connecting AI capability to business strategy, informed by personal practical experience and updated as the technology evolves.

Organisational Literacy (New)

The fourth level we did not see in 2023. Organisational literacy is the collective ability of an organisation to adopt, govern, and evolve AI capabilities. It encompasses culture, processes, governance, and infrastructure. An organisation with high organisational literacy can adopt new AI capabilities faster because the institutional readiness exists.
This is not the sum of individual literacies. It is an emergent property of how the organisation supports AI adoption across teams, how it governs AI use, and how it learns from AI deployments.
4 levels of AI literacy: conceptual, practical, strategic, and organisational (added in 2025).

Measuring Literacy

One of the most requested outputs from our literacy work is a way to measure it. After extensive iteration, we use a combination of:
  • Self-assessment surveys that measure confidence and reported usage. These are directionally useful but systematically biased toward overestimation.
  • Task-based assessments where participants complete real AI-assisted tasks under observation. These measure actual capability but are time-intensive.
  • Usage analytics from AI tools that show frequency, breadth, and sophistication of use over time. These are the most objective measures but miss quality.
No single measure is sufficient. The combination provides a reliable literacy profile that we use to target training investments where they will have the most impact.
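To make the combination concrete, here is a minimal sketch of how the three signals might be weighted into a single score. Everything here is illustrative: the field names, the discount applied to self-assessment (to correct for the overestimation bias noted above), and the weights are hypothetical, not RIVER's actual scoring model.

```python
from dataclasses import dataclass


@dataclass
class LiteracySignals:
    """Three measurement sources, each normalised to a 0-1 scale (hypothetical)."""
    self_assessment: float  # survey score; systematically biased upward
    task_assessment: float  # observed performance on real AI-assisted tasks
    usage_score: float      # frequency, breadth, sophistication from tool analytics


def literacy_profile(s: LiteracySignals,
                     self_discount: float = 0.7,
                     weights: tuple[float, float, float] = (0.2, 0.5, 0.3)) -> float:
    """Combine the three signals into one 0-1 literacy score.

    Self-assessment is discounted for overestimation bias; task-based
    assessment carries the most weight because it measures actual
    capability. All parameters are illustrative defaults.
    """
    adjusted = (s.self_assessment * self_discount,
                s.task_assessment,
                s.usage_score)
    return round(sum(w * v for w, v in zip(weights, adjusted)), 3)
```

In practice a profile like this would be computed per literacy level per team, then used to decide where training investment goes; the point of the sketch is only that the blended score never rests on a single biased signal.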

The three literacies model was a starting point, not a final answer. Two years of deployment have refined it into something more nuanced, more practical, and more honest about how hard AI fluency is to develop and sustain. The core insight remains: AI adoption is a literacy problem, not a technology problem. The update is that literacy is harder, more layered, and more perishable than we initially thought.