I need to write this down while it's still fresh, because I'm not sure I trust my own reaction yet. OpenAI released ChatGPT on November 30th. It's now December 5th and I've spent more time talking to it than I've spent on any single thing this week, including actual work.
I should be clear about where I'm coming from. I've been following AI for years. I wrote about GPT-3 back in 2020, and my take was cautious: impressive technology, unclear application, lots of hype. I wrote about Stable Diffusion three months ago and reached a similar conclusion: interesting, but not ready.
ChatGPT is different. I don't fully understand why yet, but it is.
What Happened
I created an account on Thursday morning out of professional curiosity. I expected to spend twenty minutes testing it, note some observations, and move on. Three hours later, I was still going. Not because I was testing it systematically, but because I kept thinking of another thing to try.
I asked it to explain our integration architecture challenges to a non-technical board member. It produced something I could actually use. I asked it to draft a project scope from rough notes. It wasn't perfect, but it was a plausible first draft. I asked it to write a Python script to process some data I'd been putting off. The script worked on the third attempt, after I corrected some of its assumptions.
Then I started asking it harder things. I described a complex delivery problem we're having with a client and asked for suggestions. Some were generic, but two were genuinely useful reframings I hadn't considered.
Why This Feels Different
GPT-3 was impressive in demos. You'd see a carefully crafted prompt produce remarkable output and think "that's clever." ChatGPT is impressive in conversation. You talk to it like a person, with context and follow-ups and corrections, and it keeps up.
That sounds like a small difference. It's not. The conversational interface makes it feel less like using a tool and more like talking to a very knowledgeable colleague who happens to be available at 11pm on a Sunday and never gets tired or annoyed by your questions.
I'm not saying it's intelligent. It gets things wrong. It's confidently incorrect about things I know well, which makes me nervous about the things I don't. It can't do maths reliably. It has no sense of what's true versus what's merely plausible. These are real limitations.
But the baseline capability, the sheer range of things it can do competently, is a step change from anything I've used before.
What I Don't Know
Here's what's honestly going through my head.
I don't know if this is a novelty effect. Maybe I'm impressed because it's new. Maybe in six months it'll feel like a slightly better search engine.
I don't know if the limitations are fixable. The confidently-wrong problem is a big one. For any use case where accuracy matters, which is most enterprise use cases, you'd need someone checking everything it produces. That limits the efficiency gain.
I don't know what the business model looks like. It's free right now, which means it's subsidised. The compute costs for running this must be enormous. What happens when they start charging?
I don't know what this means for the work we do. That's the question that kept me up last night. We build enterprise software. We write code, design interfaces, manage integrations, handle data. How much of what we do could this technology eventually handle? Not today. But in two years? Five?
I don't have answers.
I've been in tech for over a decade and I've learned to be sceptical of "this changes everything" moments. But sitting with ChatGPT at midnight, having it help me think through a genuine business problem, I had a feeling I haven't had in a long time: I have absolutely no idea where this goes.
What I Am Sure Of

A few things I'm confident about even this early:

This is not a gimmick. The underlying capability is real. Whether ChatGPT specifically succeeds is almost beside the point. The research behind it is advancing at a rate that guarantees something like this will exist and improve.

The interface matters. GPT-3 was available for two years before ChatGPT launched, and the underlying technology is similar. The chat interface is what made it accessible. That's a lesson about technology adoption: capability alone isn't enough. The experience has to be right.

The questions are more important than the answers. Right now, everyone is posting impressive ChatGPT outputs on Twitter. The more important conversation is about what this means for how we work, learn, create, and verify information. That conversation hasn't started yet.

I'm going to spend more time with this. I'll try to be more systematic about testing it against the actual work we do, and I'll write something more analytical once I've had time to think properly.

Right now, I'm just being honest about my reaction: I'm surprised, I'm excited, and I'm uncertain in a way that feels important.

Isaac Rolfe
Managing Director
