Mar 10, 2025 | AI, Leadership
Ever wonder why smart groups often make poor decisions?
Businesses and institutions often slip into ‘Groupthink’. From casual groups to formal teams, even when aware of their tendency toward confirmation bias, they naturally favour opinions aligning with prevailing views.
At its worst, Groupthink means ignoring opportunities to consider differing opinions and data, and dismissing them when they are presented. This usually leads to choices that, with the benefit of hindsight, are clearly stupid. Think of it as everyone boarding the wrong train because no one dared to question the destination.
Alignment, that oft-used management cliché, is however essential for optimal performance. Everyone on the team should clearly understand the direction they’re heading and why that direction matters. To extend the metaphor, everyone on the train knows where it is going, what their role is, and what they need to do on the journey to arrive at the declared destination.
True alignment happens when all opinions, information, and data have been carefully considered, weighed, and distilled into a clear consensus. The best choice is obvious, and everyone either fully supports it or at least understands it as the optimal route forward. The strategic challenge is ensuring the destination to which all are aligned is the optimal choice given the strategic, competitive, and regulatory context.
Are ‘Groupthink’ and ‘Alignment’ synonyms? Or just two sides of the same coin?
Groupthink: Bad. Alignment: Good.
Both can suffer from confirmation bias, even when teams consciously try to avoid it. Alignment can become especially dangerous if unchecked confirmation bias sneaks in.
Many strategies exist to ensure the best choices emerge from challenging decisions. Employing a Devil’s Advocate is one approach to removing any pre-existing bias. It includes techniques like ‘red teaming’, and involving independent external experts for objective interrogation.
ChatGPT 4.5 recently landed in my account with its ‘Deep Research’ capability.
This marks a genuine leap forward for AI.
Earlier models like ChatGPT 3.5 already enabled asking reflective questions like, “What have I missed?” or “What should I be asking?” Although useful, these prompts typically delivered limited responses.
ChatGPT 4.5 with Deep Research elevates the Devil’s Advocate approach to an entirely new level. It deeply interrogates the topic, reasoning through provided prompts and resources to deliver nuanced, sophisticated, and profoundly useful insights.
This capability changes the game for management teams, provided their commitment to a particular viewpoint doesn’t block genuine consideration of alternatives.
I remember Bill Shorten’s absurd 2012 ‘blind support’ gaffe when asked for a response to PM Julia Gillard’s removal of Peter Slipper as Speaker. He said, “I haven’t seen what she said, but I support whatever it is that she said.” While Shorten understandably wanted to avoid contradicting the PM, his words perfectly illustrate how blindly following a position without any questioning is just dumb. He was, however, perfectly aligned with the PM, useful when climbing slippery leadership poles.
Mar 5, 2025 | AI, Leadership
AI is the latest corporate cure-all. Just sprinkle some over your business, and inefficiencies vanish. At least, that’s the pitch.
Everyone from academics and government bureaucrats to consultants, seasoned practitioners, casual observers, and the local conspiracy theorist has an opinion on its transformative power. Digital transformation discussions obsess over AI, treating it as a magic elixir capable of solving all operational woes.
The advice is often generic, but sound: define objectives, assemble teams, allocate resources, identify use cases, research the best tools, establish a process to scale successful experiments, and so on. Logical steps, but there’s a crucial caveat beyond the difficulty of execution: the false assumption that ‘business as usual’ can be improved with a few AI tools.
The gravitational pull of the status quo is underestimated. Many assume that AI’s elegance and utility will naturally override entrenched habits and outdated processes.
It won’t.
Change doesn’t happen because of technology; it happens because there’s an undeniable, compelling reason to shift. That reason must be powerful enough to overcome the inevitable resistance. The benefits of change are often broad and enterprise-wide, but the costs, both real and perceived, tend to be personal, creating the very resistance that stalls progress.
No matter the size or urgency of the change, the Theory of Constraints applies.
The speed of any process, including transformation, is determined by its biggest bottleneck. Identify the constraint, remove it, and then tackle the next biggest friction point. When the constraint is culture, the weight of the status quo, and the psychological safety of individuals, change demands a different approach. To be successful, it must be driven by empathy, engagement, and a keen understanding of what’s really at stake for the individuals at the ‘coalface’ of the change.
The compounding effect of small but continuous improvements is what drives real progress. Rinse and repeat, again, and again.
Used tactically, AI is enormously valuable now and will only accelerate in importance.
I have a three-part mantra for tackling bottlenecks: Automate, Delegate, Eliminate.
AI excels at all three. It automates processes, enables and manages delegation (sometimes through outsourcing), and eliminates inefficiencies by delivering transparency and reducing waste.
However, AI alone is not enough. Re-engineering a process is not about throwing technology at a problem. It requires leadership, a deep understanding of why bottlenecks exist in the first place, and the willingness to take decisive, sometimes radical, action.
The brutal truth: AI doesn’t make bad decisions good, lazy leadership effective, or broken cultures functional. It just automates the mess faster. If organizations don’t adapt, if people, workflows, and mindsets don’t shift, then AI will be nothing more than an expensive distraction.
To truly reap its benefits, businesses must not just implement AI but also create an environment where it can thrive. And that demands real leadership. AI does not lead; it can only go where it is directed, to the situations where its ability can be leveraged. If leadership is missing, all AI does is magnify and accelerate the impact of the problems, creating uncertainty along the way.
Feb 3, 2025 | AI, OE, Small business
Sledgehammers in skilled hands can be both a significant tool of productivity and a destructive force.
AI is the newest sledgehammer on the commercial and personal block.
It gives everyone the opportunity to write a blog, book, opera, make a movie, paint a landscape or portrait, or post an outrageous opinion. It is the most democratising technology ever invented.
What AI does not do, and will never do, is replace the quality of thought and creativity that humans are able to bring to a problem, situation, or creative exercise. However, AI can amplify human ingenuity by offering the opportunity to greatly increase the quality, efficiency, and breadth of thought an individual can bring to a situation.
For small manufacturing businesses and their supply chains, AI is typically seen as a productivity tool. Indeed, it excels at optimising operations, streamlining workflows, and enhancing quality control. More importantly for the future however, it is a tool that expands capabilities, enabling businesses to innovate faster, respond dynamically to market demands, and identify new opportunities before competitors do.
Imagine brainstorming sessions supercharged by AI, where potential solutions are generated, refined, and paired with actionable deployment plans in real-time. This can give small manufacturers a significant edge, allowing them to pivot swiftly in response to challenges and lead their industry through innovation rather than follow.
This has profound implications for talent acquisition and retention.
Rather than just focusing on traditional technical expertise, increasingly available via AI, businesses should prioritise those with ‘flexible minds.’ These individuals may not always be top-tier engineers in terms of mathematical skills, extremely creative marketers, or inquisitive operations managers, but they excel at envisioning multiple outcomes and solving complex problems creatively and rapidly. They can visualise scenarios, identify risks, and devise solutions backwards and forwards, often outperforming those who think only sequentially.
This ability will equip employees for the increasingly complex, variable, and competitive world of modern manufacturing. By leveraging AI to empower employees to perform tasks outside their established skill sets, small businesses can boost innovation, adaptability, and resilience. This not only enhances productivity but also builds a workplace culture that fosters satisfaction, motivation, and long-term growth.
In the past, I have advocated that the primary consideration in identifying productive employees, after being very clear about the skills required to do a job, is curiosity. The emergence of AI elevates curiosity almost to the level of, and in some cases above, the requirement for specific skills.
The risks of ignoring AI adoption are stark. Competitors who embrace AI will gain efficiencies, reduce costs, and innovate faster. Businesses that delay integrating AI will find themselves outperformed, struggling to keep up with quality expectations and delivery timelines.
The question small manufacturing businesses should ask themselves is: Are we willing to risk falling behind, or are we ready to lead the industry through smart, strategic AI adoption?
By increasing participation, independence, and the breadth of employee skills through AI integration, small businesses can secure their competitive advantage and thrive in an AI-driven world.
Jan 29, 2025 | AI, Governance, Strategy
The tech news of the decade blew up on Monday January 27, 2025.
Nvidia, the darling stock of the AI revolution, dropped US$600 billion (about 17%) in market capitalisation in one day, the biggest one-day loss in stock market history. It sparked a sell-off of other tech stocks, leading to a sector drop of 5.6%.
Has the bubble burst, or is it just the theories of Clayton Christensen writ large, again?
The spark was the recognition by the technical wizards and stock analysts of the impact of the Chinese AI architecture represented by DeepSeek R1.
Surprisingly, DeepSeek released a research paper outlining their approach to AI training. It details an architecture that dramatically reduces the cost and complexity of training LLMs while delivering results at least as good as OpenAI’s and comparable models. It took a week or so for the described technology and results to be absorbed and understood, culminating in Monday’s panicked sell-off.
Is this a bubble bursting or just a sensible reordering of expectations?
Two factors outside corporate malaise have dogged my innovative efforts over the years, both of which are in play here:
- The notion that innovation takes place in an environment of constraints. While history demonstrates the truth of this, the stories we tell ourselves celebrate what appears to be great innovation emerging as a result of chaos. In this case, the restrictions placed on China’s access to the existing technology created the constraints it has now beaten.
- What I call the ‘Christensen effect’, better known as the Innovator’s Dilemma after Harvard professor Clayton Christensen, is proven accurate time after time, after time. Christensen accurately saw that a high-cost solution to a problem would eventually be replaced by a much lower-cost solution to the same problem. DeepSeek is just another example of the power of his observation.
For security reasons, the US under the Biden administration put export bans on Nvidia chips, chipmaking tools, and development software. These bans covered US allies in an effort to isolate China from the intellectual capital as well as the means to bridge the technology gap that suddenly appeared. It would appear that rather than accepting the ban and going home, the Chinese used it as a motivator to rethink the engineering of the guts of AI systems, and came up with a solution that addressed the two hurdles facing current AI:
- The enormous amounts of data required to train the models.
- The huge drain on power required to process even modest requests to the models for a response.
Both, it would seem, are game-changers, as the probable cost reduction for AI platforms is enormous.
The real question for those who run businesses that use this technology, or who are starting to use it more generally in their lives, which is all of us, is: what comes next?
Here is what I think, assuming the initial hype is close to the mark, and not another chimera like the Theranos scam.
- The huge allocations of capital being made by the big US companies, Microsoft, Google, Amazon, and Meta, will be put on ice. Nvidia has hundreds of billions of dollars in orders from these giants that it cannot currently adequately fill. Some, if not many, will be quietly cancelled.
- More billions allocated to build the infrastructure to accommodate the models, big chunks of expensive land, and power sources will also be slowed down. For example, the ‘Stargate project’ triumphantly announced last week by the president, involving a US$500 billion investment by the government, will become just another Trump press release consigned to the round file. The project as outlined is a JV with Oracle, Microsoft, SoftBank, and others to build AI capability in the US. It represented an equity investment by the government in the commercial leveraging of emerging technology, a first. I also speculate that Microsoft’s proposal to fire up a mothballed nuclear reactor at Three Mile Island will require a rethink, although it may have been, at best, a thought-bubble.
- The disruption created by the DeepSeek technology will redirect the tsunami of capital towards Chinese technology, until the next innovation iteration comes along. This will both geometrically accelerate the rate of adoption necessary for businesses that want to keep up with competitors, and make the current security concerns surrounding TikTok look trivial by comparison.
- The disruption might ‘democratise’ the use of AI in the sense that it will be more widely available once the costs are dramatically reduced. Alternatively, it may mean that the existing ‘moat’ controlled by the current crop of AI platforms, all American, will be replaced by a Chinese moat.
- Regulating AI in some way has been a topic of frantic debate since OpenAI launched ChatGPT. To observe that regulators have no idea would be accurate. Now, instead of regulators being caught with their pants around their ankles, it is apparent that their pants, if they own any, are secure in the wardrobe. In a regulatory and geopolitical sense, we are spinning out of control.
- The development of systems that enable humans to expand the reach and depth of the intelligence we evolved to have will accelerate further, driven by the huge reduction in cost that appears probable as a result of this Chinese breakthrough. We had better all start learning Mandarin.
As the old Chinese saying goes, ‘We live in interesting times’.
Jan 24, 2025 | AI, Marketing
Close your eyes.
Now think of the sound that happens when you open Netflix or HBO, the cello riff at the opening of Game of Thrones, or the McDonald’s ‘ba dada boop ba’ that ends every ad.
You can ‘hear’ them in your mind; they are an unambiguous reminder of what you are about to see and hear.
Think now of a song that meant something important to you when you were growing up. All you need are the opening bars of the music.
Can you feel the emotion that memory brings?
We humans are very tuned in to music (apologies for the poor pun). Somehow it sticks in our brain and opens a door to our memories, emotions, and situations.
How would you like to have a sound that, for your customers, wider networks, and those with a casual acquaintance with your brand, brings your value proposition straight into their brain?
In the past, that marketing luxury was the territory of large companies with large marketing budgets. You had to pay songwriters and musicians, pay royalties, and hire studios, session singers, or even celebrities.
All very expensive and time consuming.
Not now.
Now you can do it in a few hours at most with an AI tool (ChatGPT, Claude, Gemini, et al.) that will write lyrics for you, and another tool to deliver the sounds to order. Want your lyrics performed in country, pop, hip-hop, metal, whatever genre? Tell the tool, and it will deliver. It will take some iterations, and prompting can be a challenge, as describing music is much less specific than prompting with text, but you can get there.
There are several AI sound generators. Suno.ai is the tool of choice of a mate who has experimented with several; I found it amazing, but there are others.
Want that sonic brand identifier?
It is now easily within your reach.
Jan 13, 2025 | AI
‘Tokenisation’ is a term bandied about by my AI-literate colleagues, commonly at least 40 years my junior. Usually, the term is used in the context of ‘it takes X number of tokens to complete this job’ or similar.
In AI, tokenisation is like breaking a sentence or text into smaller pieces (tokens) so that a computer can understand it and, based on statistics, predict what the next word (token) will be. The process takes a paragraph and chops it into words, parts of words, or even individual letters, now called tokens. This then enables the system to find patterns and make predictions of the next part of the text using probabilities derived from the context represented by the previous tokens.
All AI platforms like ChatGPT, Perplexity, Claude, and the myriad tools that have emerged from the woodwork over the past two years are trained to understand and generate tokens in this manner. They ‘learn’ by examining tons of text and figuring out how words and sentences relate to each other statistically. Tokenisation is the first step in this process.
When you give text to an AI tool, it:
- Breaks the text into ‘tokens’.
- Assigns a number to each token (a kind of code).
- Processes these numbers to understand statistically the patterns and relationships.
- Uses this understanding to answer questions, summarise, or generate text, or in some cases a graphic representation of the tokens.
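To make those steps concrete, here is a minimal sketch in plain Python. It is a deliberately toy scheme: whole words stand in for the sub-word pieces a real tokeniser produces, and a simple follower count stands in for a trained model, but it walks through the same sequence of breaking text into tokens, assigning each a number, gathering the statistics, and using them to predict the next token.

```python
from collections import defaultdict

text = "the cat sat on the mat because the cat was tired"

# 1. Break the text into 'tokens' (here, simply whole words).
tokens = text.split()

# 2. Assign a number to each distinct token (a kind of code).
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
ids = [vocab[tok] for tok in tokens]

# 3. Process the numbers to find statistical patterns: count how
#    often each token is followed by each other token.
follows = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(ids, ids[1:]):
    follows[current][nxt] += 1

# 4. Use that understanding to 'generate' text: pick the most
#    frequent follower of "the" seen so far.
id_to_token = {i: tok for tok, i in vocab.items()}
the_id = vocab["the"]
predicted = id_to_token[max(follows[the_id], key=follows[the_id].get)]

print(ids)        # [0, 1, 2, 3, 0, 4, 5, 0, 1, 6, 7]
print(predicted)  # 'cat', because 'cat' follows 'the' most often here
```

A real model does the same sort of thing with tens of thousands of sub-word tokens and billions of learned parameters rather than a frequency table, which is where the statistics, and the energy bill, come from.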
There are different ways to break down text, depending on the instructions given to the model. It is also why the response to instructions can sometimes go crazy, as the machine does not always ‘hear’ what you thought you said. The placement of something as simple as a comma in the instructions can, and often will, alter the output.
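Using the same toy word-level scheme, you can see why: a single comma changes the token sequence, so the model is statistically working from a literally different input (a real tokeniser splits more finely, so the shift is subtler, but the input still changes).

```python
# One comma changes the tokens the model 'hears'.
a = "Summarise the report, then list the risks".split()
b = "Summarise the report then list the risks".split()

print(a[2], b[2])  # 'report,' versus 'report' are different tokens
print(a == b)      # False: two different token sequences
```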
This breaking down of text into ‘tokens’ is an essential step in the AI process. It is all about statistics and patterns, without any meaning as we understand it being given to the words themselves.
AI is a predictive machine that gives you the next most likely outcome given the instructions you have given the model. The way those instructions are interpreted, based on how the tokens drive the patterns and relationships, delivers the outcome.
It is also how AI can work across languages, and why it consumes huge amounts of energy to run the billions of statistical calculations underlying the response it gives.
The above is a vast simplification of the process, but it ‘sort of’ satisfies an old marketer like me, trying to understand this new world that has suddenly arrived on my doorstep. It also explains the limitations of the models, as the ‘training’ is done on existing data. The system has no capability to leverage ‘knowledge’, that human capability that enables completely disconnected facts and ideas to be put together in entirely new ways. I can only assume that this is where the current research is directed: artificially building the ‘neural networks’ that we have as an outcome of millions of years of evolution.