Cockroach subsidies: Why Australia pays multinationals to stay

Federal and state governments now face a steady queue of large, tax-advantaged multinational corporations with a simple message: “Subsidise us, or we shut the gates.”

Jamie Dimon, CEO of JP Morgan, said on a recent earnings call: “When you see one cockroach, there are probably more.”

We now see the same thing with corporate subsidies.

Once one bailout appears, a small army of “essential” projects scuttles out from behind the skirting board.

Think about a few recent examples.

Liberty Steel’s Whyalla steelworks receives a multi-billion-dollar rescue package.

Glencore secures support for its Mount Isa copper smelter and Townsville refinery.

Nyrstar’s lead and zinc smelters attract funding.

Arnott’s receives a $45 million grant to ‘shore up its balance sheet’.

On top of that you have the fuel tax credit scheme running at around ten billion a year, and a series of Petroleum Resource Rent Tax concessions.

Not every one of these choices fails a hard-headed test. Some, probably many, will stack up when you count jobs, regional impact, supply chain risk, and national sovereignty. However, that does not alter the simple fact that the only ‘policy’ we have is to be selectively tactical in our responses. There is little integrated, coherent policy aligned with the long-term best interests of the country, let alone policy that commands bipartisan support.

The problem sits with the adversarial nature of our political system, and the ongoing failure of successive governments to provide a stable, reliable, long-term investment environment.

Taken together, the tactical responses do not look like strategy, but they do look like frantic pest control in a kitchen nobody bothered to design properly.

The cockroaches are running wild, demanding sustenance.

There is a common thread.

Most calls for subsidies exploit the absence of a coherent energy policy, restrictive and time-consuming approval processes, and a small domestic market.

Governments then reach for subsidies to stop often extremely wealthy, tax-advantaged multinationals from walking away with their capital in search of the best risk-adjusted returns elsewhere.

It pits national governments against one another in a global options game that filters down to regional governments.

In contrast to our ad hoc playbook, China has played a long and highly strategic game with subsidies. For example, it has spent years locking down the global supply of rare earth minerals, and Chinese firms now dominate large parts of the EV supply chain. The same playbook has been applied to batteries, solar panels, and increasingly AI.

It is a giant international poker game, and we are a minor player holding a few good cards, if we play them well.

We supply resources, are politically and economically stable (despite the problems), and have an educated workforce. However, we have shallow, short-term-oriented capital markets, so we need investment to leverage our natural assets, even as we rabbit on about sovereign capability.

For Australian governments to attract mobile capital on sensible terms, we need a different offer.

Subsidies and favourable tax treatment can play a role, but they do not carry the game when they are subject to management by press release and the loading of investment into marginal seats.

Serious investors look for something more valuable: an educated, reliable workforce, deep technical capability, and stable institutions, all of which contribute to the certainty that encourages investment.

The strategic dilemma is that the countries we compete against work from a different set of foundational assumptions, and those assumptions deliver them a competitive advantage.

On one side sit the cheques written to keep multinational operations in place.

On the other side sit the losses in productive capacity, skilled jobs, capability building, and tax revenue if those operations close.

Do our governments, bureaucracies, and political culture have the capability and courage to wrestle with that complexity?

Because until they do, the cockroach subsidies will keep multiplying under the fridge.

Is AI forcing orchestration to replace delegation?

AI is stripping out the commercial friction that previously required middle management as coordinators.

The old vertical model, with layers of functions passing work up and down the organisational pyramid, is being replaced by horizontal flows of cross-functional orchestration.

Traditional organisations run on vertical alignment. Each function optimises its own sequence of tasks, reporting neatly up the line. It looks tidy on a chart, but in reality it can be chaotic. Customers don’t live in your vertical world. They move sideways, across sales, production, logistics, and service, expecting a seamless experience.

AI is flipping that organisational pyramid on its side. It can connect once-isolated functions into a single horizontal process. What was once delegated up and down now needs to be orchestrated across.

Sequential processes, the bread and butter of functional work, are predictable. They’re easy to automate and improve. But the processes that serve customers aren’t sequential. They are coordinated, and they demand awareness of what’s happening across functions, not just within them.

This difference matters. Sequential work relies on delegation. Coordinated work requires orchestration. The first is mechanical; the second is more like music.

To orchestrate effectively, AI needs agency. It must be allowed to make choices within parameters, not just follow a script. Without that, automation collapses into the same bottlenecks middle management used to create while claiming to fix them. True orchestration demands that machines can choose the next note when the music changes.

This is gold for the cost hawks and process zealots who love squeezing inefficiency from sequential work. It is also gold for customer-facing teams, because orchestration delivers something far more valuable: speed. When everything else (price, specification, guarantees) is roughly equal, two things decide who wins:

  • Delivered In Full, On Time (DIFOT): what was promised, when it was promised, without error.
  • Cycle time: how fast an order moves from request to fulfilment.

Do both better than the competition and you are operating inside their OODA loop (observe, orient, decide, act), seeing, deciding, and acting faster than they can react.
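
To make those two measures concrete, here is a minimal sketch of how they might be calculated from basic order records. It is an illustration only: the field names (order_date, promised_date, delivered_date, delivered_in_full) are assumptions for the example, not a reference to any particular system.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class Order:
    order_date: date         # when the customer asked
    promised_date: date      # what was promised
    delivered_date: date     # when it actually arrived
    delivered_in_full: bool  # nothing short-shipped, nothing substituted

def difot(orders: list[Order]) -> float:
    """Share of orders delivered in full AND on or before the promised date."""
    hits = sum(1 for o in orders
               if o.delivered_in_full and o.delivered_date <= o.promised_date)
    return hits / len(orders)

def average_cycle_time_days(orders: list[Order]) -> float:
    """Average days from order placement to delivery."""
    return mean((o.delivered_date - o.order_date).days for o in orders)

orders = [
    Order(date(2025, 3, 1), date(2025, 3, 5), date(2025, 3, 4), True),   # on time, in full
    Order(date(2025, 3, 2), date(2025, 3, 6), date(2025, 3, 8), True),   # late
    Order(date(2025, 3, 3), date(2025, 3, 7), date(2025, 3, 6), False),  # short-shipped
]
print(f"DIFOT: {difot(orders):.0%}")                                      # 33%
print(f"Average cycle time: {average_cycle_time_days(orders):.1f} days")  # 4.0 days
```

Tracked weekly against a target, those two numbers tell a customer-facing team whether the orchestration is actually working.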

AI will not just make work faster. It will force organisations to decide whether they develop and trust their AI systems more than their existing manual processes. That is not a technical question; it is a cultural one, and it is coming faster than most hierarchies can flatten.

AI at Three: Are We Still Thinking for Ourselves?

Three years ago, on a quiet Australian evening at the end of November 2022, I opened a browser tab, typed “ChatGPT” and fell down a rabbit hole.

What looked like a clever party trick now sits inside almost every screen we touch.
It writes, codes, designs, answers emails, joins meetings, and offers to “think” for us while we make a coffee.

In just 36 months we moved from clunky GPT‑3 guesses to multimodal systems that listen, speak, watch, and generate video on demand.
On one side you have GPT‑5 and its cousins baked into productivity suites and operating systems.
On the other, Google’s Gemini stack now spins out images, videos, and live voice conversations as easily as a teenager scrolls TikTok.

AI grew from toy to infrastructure in about the same time it takes a toddler to stop falling over and start raiding the kitchen drawers.
That speed should excite you.
It should also scare you.

Because the real story of the last three years is not just about what the machines can now do.
It is about what we have quietly stopped doing in our own heads.

The brain’s original productivity stack

Our brains came with a built‑in performance optimisation system long before anyone wrote an API.
Evolution tuned us to manage cognitive load.
We ignore most of what hits our senses, notice the noisy or dangerous bits, and save deep thinking for the moments that matter.

Daniel Kahneman’s System 1 and System 2 language still earns its keep.
System 1 reacts fast, with stories, shortcuts, and habits.
System 2 turns up late, asks annoying questions, and burns a lot of glucose.
Most days we try to get through life with as little System 2 effort as possible.

AI slots neatly into that wiring.
It feels like the perfect extension of System 1.
You type a vague prompt, it hands you a fluent answer.
No sweat, no friction, no uncomfortable silence while your own brain strains to find the words.

That convenience is the real seduction.
It doesn’t just save time; it removes the discomfort that usually forces us to think.

From go‑kart to Formula One

When I first wrote about ChatGPT in late 2022, I compared it to moving from a dinghy to a hydrofoiling catamaran.
The old chatbots the banks abused us with felt like wobbly go‑karts compared to this new Formula One car.

Back then, the outputs wobbled as well.
We all laughed at confident nonsense and obvious hallucinations, and the smart users treated it as a useful idiot.
Good at grunt work, dreadful at judgement.

Three years on, the idiot has become frighteningly competent at the grunt work.
You can hand it a video, a spreadsheet, three PDFs, and a cryptic prompt, and it will respond with a structured summary, charts, and a draft board paper.
In marketing, tools that Christopher Penn and others have championed now automate analysis that once absorbed whole analytics teams.
In social media land, Michael Stelzner’s tribes test and adopt AI helpers across content planning, scheduling, and reporting.
The scaffolding of digital work now assumes an AI layer.

The productivity upside is obvious.
Small teams now do work that once demanded a department.
Solo consultants carry an army of junior analysts in their laptops.
The cost of running experiments, simulating scenarios, and visualising ideas has collapsed.

The upside: a cognitive exoskeleton

Used well, AI behaves like a cognitive exoskeleton.
It doesn’t replace your muscles; it lets you lift more.

You can:

  • Ask better questions and get a structured first pass at the answers.
  • Stress‑test your strategy by asking a model to argue the opposite case.
  • Turn messy meeting transcripts into actions, risks, and decisions.
  • Compress a week’s background reading into an evening.

For curious people, this remains a golden age.
If you bring a clear point of view, a half‑decent mental model, and a willingness to challenge the output, AI expands your reach.
You see more, faster.
You turn half‑formed hunches into interrogated options.

This is the optimistic story I see from the best AI practitioners.
Penn talks about clear use cases, measurement, and governance.
Stelzner urges marketers to become the AI expert inside their organisation rather than the victim of it.
Mark Schaefer reminds us that in a world of infinite content, only the work grounded in real human insight and community connection stands a chance.

Treat AI as leverage on your thinking, and you win.
Treat it as a substitute for thinking, and you slide quietly into trouble.

The downside: content shock on steroids

The economics of content changed long before ChatGPT.
Mark Schaefer called the problem “Content Shock” a decade ago: content supply would eventually exceed human attention, and the returns to yet another blog post would fall off a cliff.

Gen‑AI turned that slow trend into a vertical line.
The cost of creating “something that looks like content” has collapsed towards zero.
You can train a model on your brand voice, press one button, and watch it spit out a month of LinkedIn posts, emails, and scripts.

The web now fills with beige word‑soup.
Technically correct.
Emotionally vacant.
Indistinguishable from the next post in the feed.

Lazy prompts produce lazy answers.
Lazy answers tempt lazy publishing.
Lazy publishing teaches audiences to ignore everything.

Most of what passes for AI‑generated thought leadership is the intellectual equivalent of supermarket white bread.
Easy to slice, melts in the mouth, leaves you hungry ten minutes later.

For strategists and marketers, this matters.
If you turn your brain off and let the prompt box rule your calendar, you don’t just waste time.
You train your customers to expect nothing of you.

The deeper risk: turning off the tools in our heads

The part that bothers me most at AI’s third birthday is not the hallucinations, the copyright fights, or even the job displacement.
It is the quiet atrophy of judgement.

Our evolved cognitive tools do several important jobs:

  • They force us to sit with ambiguity instead of rushing to an answer.
  • They nudge us to compare new information with our lived experience.
  • They help us detect bullshit: in others, and in ourselves.

Every time we outsource those jobs to a model, we rob our System 2 of practice.
We still get an answer, but we no longer earn it.

It feels efficient in the moment.
In the long run, it erodes the very muscles that strategy and leadership rely on.

Worse, AI answers arrive wrapped in the fluency of natural language.
They sound like us.
They sound like authority.
That fluency can smuggle untested assumptions, shallow reasoning, and comforting half‑truths straight past our defences.

Three years in, I see two diverging paths:

  • People who use AI to expand their curiosity, test their thinking, and widen their circle of competence.
  • People who use AI to avoid the discomfort of thinking altogether.

Both groups think they are being more productive.
Only one group is actually becoming more capable.

The economics and the power shift

There is another angle to AI at three that we usually duck. The money.

In three years we have concentrated astonishing economic power into a very small group of firms. A handful of hyperscalers, one or two chip designers, and a short list of frontier labs now sit in front of almost every serious AI workload. Everyone else rents from them.

The scale of the bet looks heroic. Trillions in planned data‑centre and chip spending, and a market that prices the leaders as if they will own that future for decades. You can call that confidence. You can also call it a hostage note written to the next interest‑rate cycle.

Take the current market darlings. The world’s favourite chip supplier books tens of billions in revenue and trades at several trillion in market value. A leading frontier lab chases double‑digit billions in annualised revenue while it still burns oceans of cash on compute. These are real businesses with real customers, but the step between those numbers and their valuations contains a huge block of hope.

We have been here before in a softer form. Around 1970 most of the value in large listed companies sat in things you could touch: plant, property, inventory. Twenty‑five years later that picture had flipped. Intangibles – brands, patents, software, customer relationships – carried most of the market value, and the accountants struggled to keep up.

AI pushes that logic to an extreme. The market is not just pricing current earnings. It is trying to price the option value of owning the picks and shovels for the next general‑purpose technology. In that world traditional ratios look broken, yet sooner or later cash flow still matters. Hope does not pay for electricity.

So are we in a bubble? My answer: not quite, but we are definitely out over our skis. The technology is real and the revenues are non‑trivial, unlike much of the dot‑com era. At the same time, the capital going in and the valuations attached to it assume a smooth path to dominance that history rarely grants.

For strategists and boards the question is not, “Is Nvidia or OpenAI overvalued?” You and I do not control that outcome. The better question is, “If AI infrastructure ends up concentrated in a few platforms, where do we want to sit in that stack, and how much bargaining power will we have?” If you ignore that question, you will find your future margins decided in someone else’s data centre.

A third‑birthday challenge

So where do we land, three years after Chattie kicked off the AI party?

On balance, I still count myself as an optimist.
The tools have already changed how I research, model, and communicate.
They have improved decision quality in businesses that choose to interrogate their assumptions rather than decorate them.

But optimism without discipline becomes delusion.

If AI turns into just another way to avoid hard thinking, we will get a short‑term productivity sugar‑hit followed by a long‑term loss of capability.
We will trade the craft of judgement for the convenience of a cursor.

My challenge to clients, and to myself, at AI’s third birthday looks something like this:

  1. Write the first page yourself.
    Before you open a model, force your own brain to articulate the problem, the context, and your best first answer.
  2. Use AI as a devil’s advocate, not a rubber stamp.
    Ask it to attack your favourite idea, not simply refine it.
  3. Refuse to publish first drafts.
    If an AI system writes something for you, treat it as scaffolding.
    Pull it apart, rebuild it, and add the scars of your own experience.
  4. Keep one craft sacred.
    Choose at least one discipline – writing, interviewing, analysing numbers, designing experiments – that you refuse to automate completely.
    That is where your edge will live.
  5. Stay interested.
    Curiosity is the one trait the machines cannot fake.
    The moment you stop asking, “What is really going on here?” you hand your agency to an algorithm.

AI at three is noisy, uneven, and moving faster than the regulators and board papers can track.
It will get smarter, more capable, and more deeply embedded over the next three years.

The question worth asking is not, “What will the next model be able to do?”
The better question is, “What will I still insist on doing with my own brain?”

NOTE: This post is entirely AI-written. That is a first for me, something I have avoided, as entirely AI-written posts are generally of little value.

While I have used AI to research posts, and help me fill in holes in logic, I have never just posted output without extremely heavy editing.

It seems AI has actually increased the time it takes me to get a post to publishable form. Not the expected outcome.

The reason is that there is so much AI slop out there: mangled, generic stuff that adds little if anything to the intellectual capital I am trying to feed, and it blots out the originality I strive for. While AI is a great helper, it is in no way a creative one. To stand out amongst the slop, each post now takes more time than it did three or four years ago.

However, it is getting better every day. This post came from a number of disconnected ideas that had been rattling around without much form, or much hope of becoming anything useful. I dictated them into Chat, and the output is there for you to judge.

In my view, it needs some editing!!

 

The top 7 ways to measure continuous improvement.

Continuous improvement is a mindset. The improvements sought are, on their own, often tiny and seemingly irrelevant in isolation. The point is the compounding of those improvements over time, which delivers the improved outcome. Proclamations from the CEO, group ‘love-ins’, and slogans on the lunchroom wall have no impact.

It is also true that not all improvements are easy to measure. How do you measure the culture that supports and feeds continuous improvement?

However, there are things you can measure that will serve as leading indicators of continuous improvement:

  • Cycle time. Measuring the cycle time of processes, and seeking to shorten it, is always an indicator of improvement. Almost all improvement activities I have seen or been involved with use time as a measure of performance.
  • Product quality. A common problem with measuring quality is defining just what the term means. To me it is, very simply, compliance with specifications, which is generally easy to measure once you have agreed the specs. The most common tool is a ‘control chart’, which graphs a measure against the upper and lower limits of acceptable variation. It works equally well for the tolerances of a machine’s output, the cycle times of any process, or the responses to a lead generation program. (A minimal calculation is sketched after this list.)
  • Customer satisfaction. Asking customers is a good place to start. There is plenty of research indicating that the degree of satisfaction an enterprise thinks it is delivering differs wildly from what its customers report when asked. Independent surveys can be very informative, and tools like the Net Promoter Score framework can deliver the numbers sought by the corner office (the arithmetic is also sketched after this list). To me, the very best measures are the rate of returning customers and customer lifetime value compared with industry peers.
  • Ratios. Driven by its strategic priorities, every business has the opportunity to employ ratios that reflect alignment with those priorities: revenue per employee, right-first-time installations, new customer revenue as a share of total revenue; the list goes on. The catch is to have as few KPIs as possible, cascaded through the organisation, so that the drivers of success are made very visible. For example, a former client instigated a company-wide KPI of gross margin per employee. It was used company-wide, and within individual functions and work groups, and it focussed attention on the activities that drove revenue and COGS.
  • Employee-generated ideas. Have a formal process for encouraging, gathering, sorting, and acting on the ideas coming from the front line. Those closest to the action always see the opportunities better than those further up the line. Engaging them in genuine process improvement has a huge side benefit: it builds a robust culture. A culture that measures, celebrates, and implements small ideas is the real engine of continuous improvement.
  • Employee satisfaction. The old saying ‘happy wife, happy life’ applies equally to employees. Happy, motivated employees are perhaps the best way to ensure that customers are well treated, and therefore return, and are prepared to refer you.
  • Financial ROI. The last on this list, but the most obvious and the most often used. You make an investment, you want a return, and the accountants will deliver a way to count it: benefit divided by the cost of implementation. The challenge is putting numbers around the benefit. At best, these measures are appropriate in specific circumstances where hard capex is being spent to improve one of the above parameters.
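
Two of the measures above reduce to simple arithmetic once the data is collected. Below is a minimal sketch with invented numbers: a simplified mean plus-or-minus three standard deviations version of a control chart (not the full statistical process control treatment), and the Net Promoter Score calculation from 0 to 10 survey ratings.

```python
from statistics import mean, stdev

def control_limits(baseline: list[float]) -> tuple[float, float, float]:
    """Centre line and control limits from a stable baseline period,
    using the common mean +/- 3 sigma simplification."""
    centre = mean(baseline)
    sigma = stdev(baseline)
    return centre, centre - 3 * sigma, centre + 3 * sigma

def net_promoter_score(ratings: list[int]) -> int:
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Control chart: hours from order to despatch during a period judged to be stable
baseline = [26.1, 24.8, 25.5, 27.0, 25.9, 24.3, 26.4, 25.2, 25.7, 26.8]
centre, lcl, ucl = control_limits(baseline)
print(f"Centre {centre:.1f}h, control limits {lcl:.1f}h to {ucl:.1f}h")

# New observations are compared against the established limits
for x in [25.4, 26.9, 33.5, 25.1]:
    flag = "investigate" if not lcl <= x <= ucl else "ok"
    print(f"{x:5.1f}h  {flag}")   # 33.5h falls above the upper limit

# NPS: ten invented 'would you recommend us?' ratings
ratings = [10, 9, 8, 7, 9, 6, 10, 4, 8, 9]
print(f"NPS: {net_promoter_score(ratings)}")  # 5 promoters, 2 detractors -> 30
```

The useful discipline is less the arithmetic than agreeing the specification limits and the survey question up front, so the numbers mean the same thing month after month.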

Header cartoon courtesy of GapingVoid.com

AI is tipping the organisational pyramid

AI is stripping out the commercial friction that previously required middle management as coordinators.

The old vertical model, with layers of functions passing work up and down the pyramid, is being replaced by horizontal flows of cross-functional orchestration.

Traditional organisations run on vertical alignment. Each function optimises its own sequence of tasks, reporting neatly up the line. It looks tidy on a chart, but in reality it can be chaotic.

Customers do not live in your vertical world. They move sideways, across sales, production, logistics, and service, expecting a seamless experience.

AI is flipping that organisational pyramid on its side. It connects once-isolated functions into a single horizontal process. What was once delegated up and down now needs to be orchestrated across.

Sequential processes, the bread and butter of functional work, are predictable. They are easy to automate and improve. However, the processes that serve customers are not sequential. They are coordinated, and they demand awareness of what is happening across functions, not just within them.

This difference matters. Sequential work relies on delegation. Coordinated work requires orchestration. The first is mechanical; the second is musical.

To orchestrate effectively, AI needs agency. It must be allowed to make choices within parameters, not just follow a script. Without that level of agency, automation collapses into the same bottlenecks middle management used to create while claiming to fix them. True orchestration demands that machines can choose the next note when the music changes.
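
As a toy sketch of what “choices within parameters” might look like, the fragment below contrasts a fixed sequential script with an orchestrator that selects the next step from the current state of an order, bounded by explicit guardrails. Every name and threshold here is invented for illustration; it shows the shape of the idea, not a product design.

```python
# A fixed script: every order walks the same path, regardless of what is happening
SEQUENTIAL_SCRIPT = ["check_credit", "reserve_stock", "schedule_despatch", "invoice"]

# Guardrails: the orchestrator may choose, but only inside these bounds
GUARDRAILS = {"max_split_shipments": 2, "max_expedite_cost": 500.0}

def next_step(order: dict) -> str:
    """Choose the next action from the order's current state, within the guardrails."""
    if not order["credit_ok"]:
        return "escalate_to_finance"           # outside the system's authority: hand to a human
    if order["stock_short"] and order["splits_so_far"] < GUARDRAILS["max_split_shipments"]:
        return "propose_partial_shipment"      # a choice the fixed script never offers
    if order["late_risk"] and order["expedite_cost"] <= GUARDRAILS["max_expedite_cost"]:
        return "book_express_freight"
    return "schedule_despatch"                 # the default, happy-path step

order = {"credit_ok": True, "stock_short": True, "splits_so_far": 0,
         "late_risk": False, "expedite_cost": 320.0}
print(next_step(order))  # -> propose_partial_shipment
```

The point is not the trivial rules; it is that the allowed choices and their limits are explicit, so the system can act across functions without waiting for a coordinator.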

This is gold for the cost hawks and process zealots who love squeezing inefficiency from sequential work. It is also gold for customer-facing teams, because orchestration delivers something far more valuable: speed. When everything else (price, specification, guarantees) is roughly equal, two things decide who wins:

  • Delivered In Full, On Time (DIFOT): what was promised, when it was promised, without error.
  • Cycle time: how fast an order moves from request to fulfilment.

Do both better than the competition and you are operating inside their OODA loop, seeing, deciding, and acting faster than they can react. That’s the sharp edge of AI’s agency.

AI will not just make work faster. It will force organisations to decide whether they develop and trust their AI systems more than their existing sequential, and often manual, processes. That is not a technical question; it is a cultural one, and it is coming faster than most hierarchies can flatten.