Your board is talking about LLMs. Your vendors are selling LLMs. Your team is using LLMs. Here's what they actually are, what they can actually do, and what they actually can't - explained for business leaders, not computer scientists.
What Is a Large Language Model?
An LLM is a type of artificial intelligence that has been trained on enormous amounts of text to predict what comes next in a sequence of words. That sounds simple. The results are not.
By learning statistical patterns across billions of documents - books, websites, code, academic papers, conversations - these models develop something that looks like understanding. They can write, summarise, translate, reason, code, and converse. ChatGPT is an LLM with a chat interface bolted on. GPT-4 is a more capable LLM. Claude (by Anthropic) is another.
The key insight for business leaders: LLMs don't "know" things in the way humans do. They predict plausible next words based on patterns. This makes them extraordinarily capable at generating human-quality text and extraordinarily unreliable at being factually correct.
How They Work (60-Second Version)
- Training: Feed the model billions of documents. It learns statistical relationships between words, concepts, and structures. This takes months and costs millions of dollars.
- Inference: Give the model a prompt (input text). It predicts the most likely continuation, word by word. This is what happens when you use ChatGPT.
- Fine-tuning: Optionally, train the model further on specific data (legal documents, medical records, your company's knowledge) to improve performance on specific tasks.
That's it. Everything you see - the writing, the coding, the analysis, the conversation - is sophisticated pattern prediction.
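The three steps above can be shown in miniature. This toy bigram counter (corpus and names invented for illustration) is nothing like a real LLM's neural network, but it demonstrates the same core idea: "training" is learning statistics from text, and "inference" is predicting the likeliest continuation.

```python
from collections import Counter, defaultdict

# Toy "language model": next-word prediction in miniature. Real LLMs
# learn these statistics with neural networks across billions of
# documents, but the underlying task is the same.
corpus = (
    "the board approved the budget . "
    "the board reviewed the report . "
    "the team approved the plan ."
).split()

# "Training": count which word follows which in the corpus.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    # "Inference": return the statistically most likely continuation.
    return follows[word].most_common(1)[0][0]
```

After "training" on those three sentences, `predict_next("the")` returns `"board"`, because "board" follows "the" more often than any other word. Scale this idea up by many orders of magnitude and you have the intuition behind an LLM.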
What LLMs Can Do
Text generation. Draft emails, reports, proposals, documentation. Quality ranges from "needs editing" to "surprisingly good" depending on the task.
Summarisation. Condense long documents into key points. Genuinely useful for enterprise - think board papers, research reports, regulatory filings.
Classification. Sort documents, emails, support tickets into categories. Fast and accurate for well-defined categories.
Translation. Between human languages, but also between formats. Unstructured text to structured data. Natural language to code. Technical jargon to plain English.
Reasoning. Work through multi-step problems. Follow logical chains. Evaluate arguments. GPT-4 scores in the top 10% on the bar exam - not because it "understands" law, but because legal reasoning has linguistic patterns it can predict.
Code generation. Write, explain, debug, and refactor code. This is one of the strongest practical applications for enterprise tech teams right now.
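To make the classification capability concrete, here is a hedged sketch of what an LLM-backed ticket classifier might look like. `call_llm` is a placeholder for whichever provider API you use, not a real library call; the transferable lesson is constraining the model to a closed label set so its output stays machine-readable.

```python
# Sketch only: `call_llm` stands in for your chosen provider's API.
CATEGORIES = ["billing", "technical", "account", "other"]

def build_prompt(ticket_text):
    # Constrain the model to a fixed label set so downstream systems
    # can rely on the output format.
    return (
        "Classify this support ticket into exactly one of: "
        + ", ".join(CATEGORIES)
        + ". Reply with the category name only.\n\nTicket: " + ticket_text
    )

def classify(ticket_text, call_llm):
    raw = call_llm(build_prompt(ticket_text)).strip().lower()
    # Guard against the model replying with something outside the list.
    return raw if raw in CATEGORIES else "other"
```

The guard clause on the last line matters: because the model is a text predictor, not a rules engine, it can always reply with something off-list, and enterprise integrations need to handle that case explicitly.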
What LLMs Cannot Do
Be reliably accurate. They hallucinate. They invent facts, citations, statistics. They present fabrications with the same confidence as truth. This is the fundamental limitation for enterprise use.
Access real-time information. LLMs know what was in their training data. They don't know what happened yesterday unless you tell them.
Access your data. Out of the box, an LLM knows nothing about your organisation. It can't read your documents, access your databases, or understand your processes. Making it useful for your specific business requires integration work.
Replace domain expertise. An LLM can draft a legal brief. It cannot practise law. It can summarise a medical report. It cannot diagnose. The output requires expert review, always.
Guarantee consistency. Ask the same question twice, get different answers. For enterprise systems that need deterministic outputs, this is a design challenge.
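One common mitigation for the consistency problem, sketched with a placeholder `call_llm` (assumed for illustration, not any specific vendor's API): request the least random output the provider offers (typically `temperature=0`) and cache responses, so identical inputs always return identical outputs.

```python
def make_deterministic(call_llm):
    # Wrap a model call so repeated identical prompts return identical answers.
    cache = {}
    def wrapped(prompt):
        if prompt not in cache:
            # temperature=0 asks most providers for their least random
            # output; the cache is what provides the hard guarantee.
            cache[prompt] = call_llm(prompt, temperature=0)
        return cache[prompt]
    return wrapped
```

Note that a low temperature alone does not guarantee identical outputs across every platform and model version; the cache, plus human review of anything novel, is what gives enterprise systems repeatability.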
175B parameters in GPT-3. GPT-4's parameter count is undisclosed but estimated at over 1 trillion. (Source: OpenAI, GPT-3 paper, 2020; industry estimates, 2023.)
The Enterprise Implications
For strategy: LLMs are a platform shift, not a product. Like mobile, like cloud, the question isn't whether to adopt them - it's how and when.
For operations: The first use cases are augmentation, not replacement. Humans reviewing AI outputs. AI handling the routine so humans can focus on the complex. This is where enterprise value starts.
For IT: LLMs introduce new infrastructure requirements. API management, data pipelines, governance frameworks, cost management. Your IT team needs to start understanding these even if you're not building yet.
For risk: Hallucination, data privacy, vendor lock-in, and regulatory uncertainty are all real concerns. None of them are reasons to wait. All of them are reasons to proceed carefully.
What You Should Do
- Understand the technology. You don't need to build models. You do need to understand what they can and can't do. This article is a start.
- Experiment. Let your team use ChatGPT (with guardrails). Let them discover what's useful and what's not. Bottom-up experimentation reveals use cases that top-down strategy misses.
- Think about data. Your competitive advantage in AI won't come from the model. It'll come from your data. Start thinking about how to make your organisational knowledge accessible.
- Don't panic. The technology is moving fast but enterprise adoption takes time. You have time to be thoughtful. You don't have time to be inactive.
