AI Is Still a Hot Mess. That's Not a Reason to Panic.
We've been here before. The technology is different. The hype is not.
I spent part of my early career in tech marketing at the now-notorious Nortel Networks, back when the telecom boom was in full swing and everyone was absolutely certain that the future was happening right now, all at once, and you were either on the train or you were left behind. I also did some communications consulting with Adobe in the days when they were buying up companies like Macromedia and seeking world domination in their domain. Over the years I have watched a lot of technologies get overhyped, under-deliver, and eventually become genuinely useful.
So when I watch the current conversation about AI oscillate between existential dread and breathless evangelism, I feel something I can only describe as “déjà vu with better graphics.”
But the problem isn’t inherently AI; it’s that we’re treating this technology like a finished product. It is spectacularly not a finished product.
Let’s talk about maturity
In the corporate tech world, there’s a framework called the Capability Maturity Model, or CMM. It was originally developed to assess software development processes, but it applies broadly to how any technology evolves from chaos to capability. There are five levels:
Initial — chaotic, ad hoc, unpredictable. It works sometimes. You’re not sure why.
Repeatable — some basic processes exist. You can reproduce results, sort of.
Defined — processes are documented, standardized, and understood.
Managed — you can measure and control what’s happening.
Optimizing — continuous improvement. This is where mature, reliable technology lives.
Here’s my honest read on where AI sits right now: we are solidly in Level 1, with a tentative toe in Level 2. We are nowhere near Defined, and Managed and Optimizing are not even on the horizon yet.
And yet the conversation happening in boardrooms, on LinkedIn, and in the media treats AI as though it’s operating at Level 4, maybe 5. As though the work is done. As though the form it takes today is the form it will always take.
We are making permanent, sweeping decisions about a technology that is still, by any honest measure, figuring itself out.
The confidence problem
Here’s the thing about AI that still genuinely astounds me, even after using it daily: it can be wrong with complete confidence.
And this isn’t a bug, exactly. It’s a feature of how the system works. AI doesn’t reason the way you and I reason. It doesn’t draw on logic or lived experience. It operates on probabilities by predicting the most statistically likely next word, next sentence, and next idea, based on an enormous amount of training data. Much of the time, that probability engine produces something genuinely useful. More often than we’d like, it produces something that sounds completely authoritative and is completely, factually wrong.
This is called a hallucination, and it happens because the model isn’t checking its output against reality. It’s pattern-matching at scale. It doesn’t know what it doesn’t know.
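To make that mechanism concrete, here is a deliberately toy sketch — not how any real model is built, just the shape of the idea in miniature. It counts which word tends to follow which in a tiny "training corpus," then always picks the statistically most likely next word. Notice what's missing: there is no step anywhere that checks the answer against reality.

```python
from collections import Counter, defaultdict

# A miniature "training corpus" -- stand-in for billions of real sentences.
corpus = "the model predicts the next word the model sounds confident".split()

# Count which word follows which (a simple bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most statistically likely next word.
    There is no fact-checking anywhere in here: the pick is
    delivered with equal 'confidence' whether the underlying
    pattern is strong, thin, or misleading."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # picks whichever word most often followed "the"
```

Real models do this with vastly richer statistics over vastly more data, but the core move is the same: predict what's likely, not what's true. That's why fluent, authoritative-sounding output can still be factually wrong.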
I’ve also spent more time than I’d like to admit fixing and refining AI outputs. Sometimes the gap between what it produces and what I actually need is significant enough that I wonder if I should have just done it myself.
And yet.
The surprise part
Just yesterday I pulled a handful of recipe headlines from an article about anti-inflammatory foods (that I need to be eating to help manage my autoimmune disease). I dropped the headlines into Claude and asked it to build full recipes from just the titles, create a meal plan, and generate a grocery list. What came back wasn’t just a list. It was a fully formatted, interactive website. It was organized, functional, and beautiful, and it contained even more useful information than I expected. Here it is if you want to check it out.
All this, from just a few headlines.
That’s the paradox of immature technology. It can floor you and frustrate you in the same afternoon. It’s still in the chaotic “Initial” phase of the maturity model, which means the highs are high and the lows are real, and neither one tells you the whole story. It’s a trough.
About that trough
If you’re familiar with the Gartner Hype Cycle, you’ll know about the Trough of Disillusionment. It’s that phase after the initial frenzy, when the reality gap starts to show and the backlash sets in. I think a lot of thoughtful people are in or approaching that trough right now. I’ve had my own moments there.
I don’t think that’s a bad thing. In fact, I think it might be exactly where we need to be. The trough is where we stop performing enthusiastically and start doing the actual work of understanding.
What actually gets us from chaos to capable
Yes, we need governance. Yes, we need regulation. Those are non-negotiables, and they’re overdue. But the piece I keep coming back to, the one that matters most at this specific moment in the maturity curve, is organizational literacy. That’s what we really need right now, more than anything.
And I mean at all levels, simultaneously. The executive who doesn’t know where to start. The late-career leader who’s waiting for it to go away. The mid-career professional who is quietly terrified of becoming irrelevant.
Because here’s what I’ve learned from watching technology mature over a 35-year career: the tools don’t save us. Understanding the tools does.
When we actually understand what AI is (a probability engine, not an oracle; a powerful assistant, not a replacement for judgment; a first draft, not a final answer), we stop being either afraid of it or blindly deferential to it. We start using it well. We catch the hallucinations. We ask better questions. We know when to trust it and when to push back.
That’s not a technology problem. That’s an education problem. And it’s solvable.
The honest truth
AI is far from complete. I believe we are barely at the MVP (minimum viable product) stage. The version you’re using today is far from its final form. The fears and the hype are both outpacing the reality, and the reality is genuinely interesting if you’re willing to engage with it honestly.
We’ve been here before. New technology arrives, we collectively lose our minds about it, and then slowly (and often painfully), we figure out what it’s actually for and how to use it well.
That process is underway, but it’s going to take time, and that’s okay.
The goal isn’t to decide how you feel about AI. The goal is to understand it well enough to use it on your own terms. That’s where the real power is. And that’s available to everyone, not just the early adopters and the true believers.
You don’t have to love it. You don’t have to fear it. You just have to understand it.
We’re only in chapter one. There’s a lot of book left.

