Three years ago, on a quiet Australian evening at the end of November 2022, I opened a browser tab, typed “ChatGPT” and fell down a rabbit hole.

What looked like a clever party trick now sits inside almost every screen we touch.
It writes, codes, designs, answers emails, joins meetings, and offers to “think” for us while we make a coffee.

In just 36 months we moved from clunky GPT‑3.5 guesses to multimodal systems that listen, speak, watch, and generate video on demand.
On one side you have GPT‑5 and its cousins baked into productivity suites and operating systems.
On the other, Google’s Gemini stack now spins out images, videos, and live voice conversations as easily as a teenager scrolls TikTok.

AI grew from toy to infrastructure in about the same time it takes a toddler to stop falling over and start raiding the kitchen drawers.
That speed should excite you.
It should also scare you.

Because the real story of the last three years is not just about what the machines can now do.
It is about what we have quietly stopped doing in our own heads.

The brain’s original productivity stack

Our brains came with a built‑in performance optimisation system long before anyone wrote an API.
Evolution tuned us to manage cognitive load.
We ignore most of what hits our senses, notice the noisy or dangerous bits, and save deep thinking for the moments that matter.

Daniel Kahneman’s System 1 and System 2 language still earns its keep.
System 1 reacts fast, with stories, shortcuts, and habits.
System 2 turns up late, asks annoying questions, and burns a lot of glucose.
Most days we try to get through life with as little System 2 effort as possible.

AI slots neatly into that wiring.
It feels like the perfect extension of System 1.
You type a vague prompt, it hands you a fluent answer.
No sweat, no friction, no uncomfortable silence while your own brain strains to find the words.

That convenience is the real seduction.
It doesn’t just save time; it removes the discomfort that usually forces us to think.

From go‑kart to Formula One

When I first wrote about ChatGPT in late 2022, I compared it to moving from a dinghy to a hydrofoiling catamaran.
The old chatbots the banks inflicted on us felt like wobbly go‑karts next to this new Formula One car.

Back then, the outputs wobbled as well.
We all laughed at confident nonsense and obvious hallucinations, and the smart users treated it as a useful idiot.
Good at grunt work, dreadful at judgement.

Three years on, the idiot has become frighteningly competent at the grunt work.
You can hand it a video, a spreadsheet, three PDFs, and a cryptic prompt, and it will respond with a structured summary, charts, and a draft board paper.
In marketing, tools that Christopher Penn and others have championed now automate analysis that once absorbed whole analytics teams.
In social media land, Michael Stelzner’s tribes test and adopt AI helpers across content planning, scheduling, and reporting.
The scaffolding of digital work now assumes an AI layer.

The productivity upside is obvious.
Small teams now do work that once demanded a department.
Solo consultants carry an army of junior analysts in their laptops.
The cost of running experiments, simulating scenarios, and visualising ideas has collapsed.

The upside: a cognitive exoskeleton

Used well, AI behaves like a cognitive exoskeleton.
It doesn’t replace your muscles; it lets you lift more.

You can:

  • Ask better questions and get a structured first pass at the answers.
  • Stress‑test your strategy by asking a model to argue the opposite case.
  • Turn messy meeting transcripts into actions, risks, and decisions.
  • Compress a week’s background reading into an evening.

For curious people, this remains a golden age.
If you bring a clear point of view, a half‑decent mental model, and a willingness to challenge the output, AI expands your reach.
You see more, faster.
You turn half‑formed hunches into interrogated options.

This is the optimistic story I see from the best AI practitioners.
Penn talks about clear use cases, measurement, and governance.
Stelzner urges marketers to become the AI expert inside their organisation rather than the victim of it.
Mark Schaefer reminds us that in a world of infinite content, only the work grounded in real human insight and community connection stands a chance.

Treat AI as leverage on your thinking, and you win.
Treat it as a substitute for thinking, and you slide quietly into trouble.

The downside: content shock on steroids

The economics of content changed long before ChatGPT.
Mark Schaefer called the problem “Content Shock” a decade ago: content supply would eventually exceed human attention, and the returns to yet another blog post would fall off a cliff.

Gen‑AI turned that slow trend into a vertical line.
The cost of creating “something that looks like content” has collapsed towards zero.
You can train a model on your brand voice, press one button, and watch it spit out a month of LinkedIn posts, emails, and scripts.

The web now fills with beige word‑soup.
Technically correct.
Emotionally vacant.
Indistinguishable from the next post in the feed.

Lazy prompts produce lazy answers.
Lazy answers tempt lazy publishing.
Lazy publishing teaches audiences to ignore everything.

Most of what passes for AI‑generated thought leadership is the intellectual equivalent of supermarket white bread.
Easy to slice, melts in the mouth, leaves you hungry ten minutes later.

For strategists and marketers, this matters.
If you turn your brain off and let the prompt box rule your calendar, you don’t just waste time.
You train your customers to expect nothing of you.

The deeper risk: turning off the tools in our heads

The part that bothers me most at AI’s third birthday is not the hallucinations, the copyright fights, or even the job displacement.
It is the quiet atrophy of judgement.

Our evolved cognitive tools do several important jobs:

  • They force us to sit with ambiguity instead of rushing to an answer.
  • They nudge us to compare new information with our lived experience.
  • They help us detect bullshit: in others, and in ourselves.

Every time we outsource those jobs to a model, we rob our System 2 of practice.
We still get an answer, but we no longer earn it.

It feels efficient in the moment.
In the long run, it erodes the very muscles that strategy and leadership rely on.

Worse, AI answers arrive wrapped in the fluency of natural language.
They sound like us.
They sound like authority.
That fluency can smuggle untested assumptions, shallow reasoning, and comforting half‑truths straight past our defences.

Three years in, I see two diverging paths:

  • People who use AI to expand their curiosity, test their thinking, and widen their circle of competence.
  • People who use AI to avoid the discomfort of thinking altogether.

Both groups think they are being more productive.
Only one group is actually becoming more capable.

The economics and the power shift

There is another angle to AI at three that we usually duck. The money.

In three years we have concentrated astonishing economic power into a very small group of firms. A handful of hyperscalers, one or two chip designers, and a short list of frontier labs now sit in front of almost every serious AI workload. Everyone else rents from them.

The scale of the bet looks heroic. Trillions in planned data‑centre and chip spending, and a market that prices the leaders as if they will own that future for decades. You can call that confidence. You can also call it a hostage note written to the next interest‑rate cycle.

Take the current market darlings. The world’s favourite chip supplier books tens of billions in revenue and trades at several trillion in market value. A leading frontier lab chases double‑digit billions in annualised revenue while it still burns oceans of cash on compute. These are real businesses with real customers, but the step between those numbers and their valuations contains a huge block of hope.

We have been here before in a softer form. Around 1970 most of the value in large listed companies sat in things you could touch: plant, property, inventory. Twenty‑five years later that picture had flipped. Intangibles – brands, patents, software, customer relationships – carried most of the market value, and the accountants struggled to keep up.

AI pushes that logic to an extreme. The market is not just pricing current earnings. It is trying to price the option value of owning the picks and shovels for the next general‑purpose technology. In that world traditional ratios look broken, yet sooner or later cash flow still matters. Hope does not pay for electricity.

So are we in a bubble? My answer: not quite, but we are definitely out over our skis. The technology is real and the revenues are non‑trivial, unlike much of the dot‑com era. At the same time, the capital going in and the valuations attached to it assume a smooth path to dominance that history rarely grants.

For strategists and boards, the question is not, “Is Nvidia or OpenAI overvalued?” You and I do not control that outcome. The better question is, “If AI infrastructure ends up concentrated in a few platforms, where do we want to sit in that stack, and how much bargaining power will we have?” If you ignore that question, you will find your future margins decided in someone else’s data centre.

A third‑birthday challenge

So where do we land, three years after Chattie kicked off the AI party?

On balance, I still count myself as an optimist.
The tools have already changed how I research, model, and communicate.
They have improved decision quality in businesses that choose to interrogate their assumptions rather than decorate them.

But optimism without discipline becomes delusion.

If AI turns into just another way to avoid hard thinking, we will get a short‑term productivity sugar‑hit followed by a long‑term loss of capability.
We will trade the craft of judgement for the convenience of a cursor.

My challenge to clients, and to myself, at AI’s third birthday looks something like this:

  1. Write the first page yourself.
    Before you open a model, force your own brain to articulate the problem, the context, and your best first answer.
  2. Use AI as a devil’s advocate, not a rubber stamp.
    Ask it to attack your favourite idea, not simply refine it.
  3. Refuse to publish first drafts.
    If an AI system writes something for you, treat it as scaffolding.
    Pull it apart, rebuild it, and add the scars of your own experience.
  4. Keep one craft sacred.
    Choose at least one discipline – writing, interviewing, analysing numbers, designing experiments – that you refuse to automate completely.
    That is where your edge will live.
  5. Stay interested.
    Curiosity is the one trait the machines cannot fake.
    The moment you stop asking, “What is really going on here?” you hand your agency to an algorithm.

AI at three is noisy, uneven, and moving faster than the regulators and board papers can track.
It will get smarter, more capable, and more deeply embedded over the next three years.

The question worth asking is not, “What will the next model be able to do?”
The better question is, “What will I still insist on doing with my own brain?”

NOTE: This post is entirely AI-written. That is a first for me, something I have avoided, as entirely AI-written posts are generally of little value.

While I have used AI to research posts and to fill holes in my logic, I have never posted its output without extremely heavy editing.

It seems AI has actually increased the time it takes me to get a post to publishable form. Not the expected outcome.

The reason is that there is so much AI slop out there: mangled, generic stuff that adds little if anything to the intellectual capital I am trying to build, yet blots out the originality I strive for. While AI is a great helper, it is in no way a creative one. To stand out amongst the slop, each post now takes more time than it did three or four years ago.

However, it is getting better every day. This post began as a number of disconnected ideas that had been rattling around without much form, or much hope of becoming anything useful. I dictated them into Chat, and the output is there for you to judge.

In my view, it needs some editing!!