
The Knowledge Management Revolution

Enterprise knowledge management is being reinvented by AI. What's changing, what to build, and why most knowledge bases are about to become obsolete.
8 September 2024·9 min read
Mak Khan
Chief AI Officer
Dr Josiah Koh
Education & AI Innovation
Enterprise knowledge management has been broken for 20 years. Wikis nobody reads. SharePoint sites nobody can navigate. Confluence spaces where information goes to die. AI is about to change this, not by building better wikis, but by making the wiki irrelevant.

What You Need to Know

  • Traditional knowledge management fails because it requires humans to organise and retrieve information. Humans are bad at both. AI is good at both.
  • The shift is from "store and search" to "ask and answer." Users shouldn't need to know where information lives. They should be able to ask a question and get a grounded, cited answer.
  • The underlying technology is RAG, but the transformation is organisational. When people can actually find and use institutional knowledge, every process that depends on that knowledge improves.
  • The ROI is measurable within weeks, not months. Time-to-answer for common questions drops by 70-80%. Onboarding accelerates. Decision quality improves because decisions are informed by institutional knowledge instead of individual memory.
70-80%
reduction in time-to-answer for common knowledge queries when moving from traditional search to AI-powered retrieval
Source: RIVER and Josiah Koh, enterprise engagement data, 2024

Why Knowledge Management Has Been Broken

Josiah put it bluntly in a recent engagement: "Your knowledge management system isn't a system. It's a graveyard."
He's right. And it's not because the tools are bad. It's because the fundamental model is wrong.
Traditional knowledge management assumes three things:
  1. Someone will organise the knowledge. Create the taxonomy, file the documents, maintain the structure. In practice, this person either doesn't exist or gives up within six months.
  2. Someone will update the knowledge. Policies change. Processes evolve. People leave. The documentation should reflect reality. It rarely does after the first year.
  3. Someone will find the knowledge. Navigate the folder structure, use the right search terms, know which version is current. In practice, people ask the person sitting next to them. Or they guess.
This model has failed in every enterprise we've worked with. Not partially failed. Comprehensively failed. The knowledge exists, scattered across SharePoint, Confluence, shared drives, email, and people's heads. But it's not accessible. And if knowledge isn't accessible, it's not knowledge. It's data.

What AI Changes

From Organising to Ingesting

AI-powered knowledge systems don't require someone to organise documents into a taxonomy. They ingest documents as they are: PDFs, Word docs, web pages, emails, presentations. The structure is created by the embedding model, not by a human librarian.
This is a fundamental shift. It means the knowledge system can be populated in days, not months. And it stays populated because new documents are ingested automatically as they appear.
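To make the shape of this concrete, ingestion reduces to chunking and embedding. The sketch below is a minimal illustration in Python: `embed()` is a placeholder standing in for a real embedding model, and all names are illustrative rather than any specific product's API.

```python
# Minimal sketch of taxonomy-free ingestion: split each document into
# overlapping chunks, embed each chunk, and append records to a store.
# embed() is a placeholder for a real embedding model.

def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    chunks, step = [], size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + size]
        if piece.strip():
            chunks.append(piece)
        if start + size >= len(text):
            break
    return chunks

def embed(chunk: str) -> list[float]:
    """Placeholder: a production system calls an embedding model here."""
    return [float(len(chunk))]

def ingest(doc_id: str, text: str, store: list[dict]) -> int:
    """Embed every chunk and record it with its source document ID."""
    pieces = chunk_text(text)
    for piece in pieces:
        store.append({"doc_id": doc_id, "text": piece, "vector": embed(piece)})
    return len(pieces)
```

The point of the overlap is that an answer rarely respects chunk boundaries; overlapping windows keep context intact without any human deciding where a "topic" starts or ends.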

From Searching to Asking

Traditional search requires users to guess the right keywords. Take a question like "What's our leave policy for employees returning part-time from parental leave?" In a traditional search engine, you'd need to find the right document (the HR Policy Manual), navigate to the right section (Section 4: Leave), and read the relevant subsection, assuming the search engine surfaces the right document at all.
In an AI-powered system, you ask the question. The system retrieves the relevant sections from the policy manual, synthesises an answer, and cites the source. If the answer spans multiple documents, the system handles the synthesis.
The user doesn't need to know where the information lives. They just need to ask.
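Mechanically, "ask" is retrieval plus synthesis. The retrieval step can be sketched as a similarity ranking over stored chunk vectors, shown here with plain Python lists; a real deployment would use a vector database, and the record shape (`doc_id`, `vector`) is an assumption for illustration.

```python
# Bare-bones "ask" retrieval: rank stored chunks against a question
# vector by cosine similarity and return the top-k, each carrying its
# source doc_id so the generated answer can cite it.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question_vec: list[float], store: list[dict], k: int = 3) -> list[dict]:
    """Return the k chunks most similar to the question vector."""
    ranked = sorted(store, key=lambda r: cosine(question_vec, r["vector"]),
                    reverse=True)
    return ranked[:k]
```

The retrieved chunks then go to the generation model together with the question, with instructions to answer only from the supplied context and to cite the source document IDs.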

From Static to Living

Traditional knowledge bases are snapshots. They reflect the state of knowledge at the time someone last updated them. AI-powered systems can be continuously refreshed as source documents change.
When a policy is updated, the new version is re-ingested. The old version is archived. Questions about the policy automatically reflect the current version. No human curation required.
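One way to sketch that refresh cycle: on update, the document's current chunks move to an archive and the new version is ingested in their place. Chunking and embedding are elided here, and all names are illustrative.

```python
# Sketch of the "living" refresh cycle: when a source document changes,
# archive its existing chunks and ingest the new version in their place.
# Re-chunking and re-embedding of new_text are elided for brevity.

def reingest(doc_id: str, new_text: str,
             store: list[dict], archive: list[dict]) -> None:
    """Swap a document's chunks for its updated version, keeping history."""
    old = [r for r in store if r["doc_id"] == doc_id]
    archive.extend(old)  # old version kept for audit, not for answers
    store[:] = [r for r in store if r["doc_id"] != doc_id]
    store.append({"doc_id": doc_id, "text": new_text})
```

Because answers are generated from whatever is currently in the store, replacing the chunks is all it takes for every future answer to reflect the new policy.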
The best knowledge management system is the one nobody has to manage. Stop expecting humans to be librarians.
Mak Khan
Chief AI Officer

What to Build

The Core System

Document ingestion pipeline. Connectors to SharePoint, Confluence, shared drives, email archives, and any other system where documents live. Automated processing: OCR for scanned documents, text extraction for PDFs, parsing for structured formats.
Vector store. Embedded representations of all ingested content, indexed for fast retrieval. We typically use pgvector for enterprise deployments, for reasons we've covered in detail previously.
Retrieval and generation. A RAG pipeline that takes user questions, retrieves relevant content, and generates grounded answers with citations. Every answer should link back to its source documents.
Access control. Critical for enterprise deployment. Users should only see answers derived from documents they have permission to access. This means the vector store needs to respect the source system's access controls.
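In practice, permission-aware retrieval means filtering candidates by ACL before ranking, so a user can never see a chunk from a document they couldn't open in the source system. A minimal sketch, assuming each chunk record carries an `acl` set of group names copied from the source system (that field is an assumption for illustration):

```python
# Minimal permission-aware retrieval: drop any chunk whose source
# document the user cannot read, *before* ranking and generation.
# The "acl" field (a set of group names mirrored from the source
# system's permissions) is illustrative.

def retrieve_permitted(store: list[dict], user_groups: set[str]) -> list[dict]:
    """Return only chunks whose ACL intersects the user's groups."""
    return [r for r in store if r["acl"] & user_groups]
```

Filtering before ranking matters: filtering afterwards leaks information, because the presence or absence of an answer can itself reveal the contents of a restricted document.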

The Features That Drive Adoption

Josiah's experience with enterprise tool adoption shaped our approach here. The core system is necessary. These features are what drive actual usage:
Confidence indicators. Show users how confident the system is in its answer. "High confidence: answer derived from current policy document" versus "Lower confidence: answer synthesised from multiple sources, some of which may be outdated." Transparency builds trust.
Source links. Every answer links to the source documents. Users can verify. They can go deeper. They can see the context around the extracted information. This is not optional.
Feedback loops. Users can flag incorrect answers, outdated sources, or missing information. This feedback improves the system and, critically, tells you which documents need updating.
Usage analytics. What are people asking about? Which documents are most frequently retrieved? Where does the system fail to find answers? This data is gold for understanding organisational knowledge gaps.
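Confidence indicators like those above can be derived from signals the system already has: the best retrieval score and the freshness of the sources. The thresholds in this sketch are illustrative placeholders, not a calibrated policy; a real system would tune them against user feedback.

```python
# Illustrative confidence labelling: combine the best retrieval score
# with source freshness. Thresholds are placeholders to be calibrated
# against real user feedback.

def confidence_label(top_score: float, max_source_age_days: int) -> str:
    """Map retrieval quality and source age to a user-facing label."""
    if top_score >= 0.8 and max_source_age_days <= 365:
        return "high"    # strong match against a current document
    if top_score >= 0.6:
        return "medium"  # decent match, or some sources may be outdated
    return "low"         # weak retrieval: answer needs human verification
```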

The Organisational Impact

Onboarding

New staff in knowledge-heavy roles (legal, compliance, operations, consulting) spend weeks or months building institutional knowledge. An AI-powered knowledge system compresses this. Day one, they can ask questions and get grounded answers. They still need to develop judgement, but the factual foundation is accessible immediately.

Decision Quality

Decisions informed by institutional knowledge are better than decisions informed by individual memory. When a claims assessor can instantly retrieve precedent decisions, relevant policy sections, and procedural guidelines, their assessment is more consistent and more defensible.

Knowledge Preservation

When experienced staff leave, their knowledge goes with them. An AI-powered system that has ingested the documents they created and referenced preserves the institutional knowledge, even if the tacit knowledge is lost.

Process Efficiency

Every process that involves someone looking something up, every process that stalls because someone doesn't know the answer, every process that produces inconsistent results because different people work from different information, improves when knowledge is accessible.

Implementation Timeline

Weeks 1-3: Document audit and connector setup. Identify all knowledge sources, build ingestion pipelines, process initial document corpus.
Weeks 4-6: Core RAG system. Vector store, retrieval pipeline, generation with citations, basic UI.
Weeks 7-9: Access control and testing. Implement permission-aware retrieval. Test with real users on real questions. Tune retrieval quality.
Weeks 10-12: Deploy and iterate. Launch to initial user group. Collect feedback. Improve retrieval and generation quality based on real usage.
Ongoing: Continuous ingestion of new documents. Monitoring of retrieval quality. Expansion to additional user groups and document sources.

The Bottom Line

Knowledge management is one of those enterprise problems that everyone acknowledges and nobody solves. AI doesn't solve it by building a better tool for organising information. It solves it by removing the need to organise information in the first place.
That's not a tool upgrade. That's not a marginal improvement. It's a fundamental change in how enterprises access what they know. And the organisations that build this capability now will compound its value across every process that depends on knowledge. Which is most of them.