Synthetic Data: A Game Changer for Small Business.

AI promises a multitude of productivity benefits for all enterprises.

For the thousands of SMEs competing with much larger rivals, AI offers the potential for easily accessible, reliable, and credible data on an unprecedented scale.

One such opportunity lies in market research, which has often been out of reach for SMEs due to its high cost.

AI systems are sophisticated probability machines. Given a base to ‘learn’ from and a set of instructions, AI can predict the next letter, word, sentence, illustration, piece of code, or conclusion. Feed it the right data to learn from, prompt that ‘learning’ with instructions, and the probability machine goes to work.
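To make the ‘probability machine’ idea concrete, here is a minimal sketch in Python, with a toy corpus invented purely for illustration: it counts which word follows which in a tiny body of text, then ‘predicts’ the most probable next word. Real models do the same thing over vast corpora, with billions of learned parameters rather than simple counts, but the principle is identical.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text a real model learns from.
corpus = "the market grows the market shifts the customer decides".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # ('market', 2/3) on this toy corpus
```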

‘Synthetic data’ is the analysed outcome of a well-articulated AI search for relevant data from publicly available sources, potentially enhanced by data from a company’s own resources.

For instance, an FMCG supplier might need ‘attitude and usage’ research to support ranging of a new product in major retailers. Traditionally, they might spend $100-200k on a combined qualitative and quantitative market research project, which could take several months to complete.

Way out of the reach of most SMEs.

Alternatively, they could invest $15-25k in an AI application to scan social media, relevant publicly available statistics, and their own sales and scan data. This AI-generated ‘synthetic data’ might not be quite as accurate as a well-designed and executed market research study. However, it could be produced quickly and relatively cheaply, and be sufficiently accurate to provide compelling market insights and consumer behaviour forecasts.
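As a hedged illustration of what the simplest version of such an application might look like, the sketch below uses the OpenAI Python SDK to generate synthetic ‘attitude and usage’ responses from a persona prompt. The persona, the questions, and the model name are assumptions for illustration only; a production system would also blend in the company’s own sales and scan data and validate the output against it.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona and survey questions, for illustration only.
persona = "a 35-year-old shopper who buys premium snacks weekly at a major supermarket"
questions = [
    "How do you decide between a familiar brand and a new product on the shelf?",
    "What would make you try a new snack brand?",
]

def synthetic_response(persona: str, question: str) -> str:
    """Ask the model to answer a survey question in character."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model would do
        messages=[
            {"role": "system", "content": f"Answer as {persona}. Be brief and realistic."},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

for q in questions:
    print(q, "->", synthetic_response(persona, q))
```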

Suddenly, opportunities previously out of reach for SMEs can be leveraged. Combined with their shorter decision cycles and less risk-averse nature, SMEs now have the potential to haul back some of the ground they have lost to deeper-pocketed large businesses.

The header illustration is via a free AI tool. It took less than 30 seconds to brief and deliver.

5 ways to discriminate between the guru and the copycat

Increasingly, we must distinguish between ‘content’ created by some AI tool, masquerading as thought leadership and advice, and the genuine output of experts seeking to inform, encourage debate and deepen the pool of knowledge.

As I read and hear the superficial nonsense spread around as serious advice, I’m constantly reminded of the story Charlie Munger often told of Max Planck and his chauffeur.

Doctor Planck had been touring Europe giving the same lecture on quantum mechanics to scientific audiences. His chauffeur had heard the presentation many times and had learnt it by heart. One night in Munich, the chauffeur suggested that he give the lecture while Doctor Planck, acting as the chauffeur, sat in the audience, resting.

After a well-received presentation, a professor asked a question, to which the chauffeur responded: ‘I am surprised that in an advanced city like Munich I get such an elementary question. I am going to ask my chauffeur to respond.’

It is hard at a superficial level to tell the difference between a genuine expert, and someone who has just learned the lines.

To tell the difference between the two, you must:

  • Dig deeper to determine the depth of knowledge and where it came from. Personal stories and anecdotes are always a good marker of originality.
  • Understand how the information adjusts to different circumstances and contexts. An inability to articulate the ‘edge’ situations offers insight into the depth of thinking that has occurred.
  • Look for the sources of the information being delivered. Peer-reviewed papers and research are always better than some random YouTube channel curated for numbers to generate ad revenue.
  • Consider the ‘tone of voice’ in which the commentary is delivered. AI-generated material will be generic, bland, average. By contrast, genuine originality will always display the verbal, written, and presentation characteristics of the originator.
  • Challenge the ‘expert’ to break down the complexity of the idea into simple terms that a 10-year-old would understand.

These will indicate to you the degree of understanding from first principles, the building blocks of knowledge, that the ‘Guru’ has.

The header is a photo of Max Planck in his study, without his chauffeur.

The ultimate ‘AI machine’ between our ears.

Our brains work on 3 levels.

At the most basic level is the ‘reptilian brain.’ This is the ancient wiring we have in common with every other animal. It monitors and manages the automatic things that must happen for life: our instincts, temperature control, heart rate, respiration, reproductive drives, everything necessary for the survival of the animal.

Next is the limbic system. This manages our emotional lives: fear, arousal, memories. It is where we store our beliefs, and in effect it provides the framework through which we make sense of the world.

The neocortex is the newest part of our brain, and the part that differentiates us from other animals. It is where we make choices, and it controls our language, imagination, and self-awareness.

This three-part picture is a metaphor. The parts of the brain do not act independently, but in an entirely integrated manner, each having an impact on the others, and receiving input from the others.

Consider which parts of this complex, interconnected, and interdependent neural system are replaceable by AI. There are not all that many, beyond the extrapolation of language and imagery from what already exists.

Despite the hype, we have a long way to go before artificial sentience will be achieved, if it is possible. (Expert opinion varies from ‘Within the decade’ to ‘Never’).

However, who cares?

The productivity gains from AI are present in some form in every current job, and the number of new jobs that will emerge is huge. Nobody had conceived of the job of ‘prompt engineer’ 3 years ago!

These new jobs, in combination with the renewal of those currently available, will deliver satisfaction and a standard of living our kids will thank us for.

Sadly, there is always a flip side. In this case it is the dark downsides we all see emerging from social media, which will also be put on steroids, and the social dislocation that will occur for those on the sharp end of the changes in jobs.

How we manage that balance will be the challenge of the 2030s.

Image by Canva.com

Neglect to Necessity: Infrastructure is a gift.

In a world dominated by discussions around AI, around electrification to ‘save the planet’, and around their impact on white-collar and service jobs, the public seems to miss something fundamental.

All this scaling of electrification to replace fossil fuel, power the new world of AI, and maintain our standard of living, requires massive infrastructure renewal.

Construction of that essential electricity infrastructure requires many skilled people in many functions. From design through fabrication to installation, to operational management and maintenance, people are required. It also requires ‘satellite infrastructure’, the roads, bridges, drivers, trucks, and so on.

None of the benefits of economy wide electrification and AI can be delivered in the absence of investment in the hard assets.

Luckily, investment in infrastructure, hard as it may be to fund in the face of competing and increasing demands on public funds, is a gift we give to our descendants.

I have been highly critical of choices made over the last 35 years, which have gutted our investment in infrastructure, science, education, and practical training. Much of what is left has been outsourced to profit-making enterprises, which ultimately charge more for less.

That is the way monopoly pricing works.

When governments outsource natural monopolies, fat profits to a few emerge very quickly at the long-term expense of the community.

Our investment in the technology to mitigate the impact of climate change is inherently in the interests of our descendants. Not just because we leave them a planet in better shape than it is heading currently, but because we leave them with the infrastructure that has enabled that climate technology to be deployed.

Why are we dancing around short-term partisan fairy tales, procrastinating, and ultimately, delivering sub-standard outcomes to our grandchildren?

Header illustration via Gemini.ai

The two separate faces of AI.

AI is the latest new shiny thing in everybody’s sightline.

It seems to me that AI has two faces, a bit like the Roman god Janus.

On one hand we have the large language models, or Generative Pre-trained Transformers; on the other, the tools that can be built by just about anyone, on top of the GPTs, to do a specific task or range of tasks.

The former requires huge ongoing capital investment in the technology and the infrastructure necessary for operations. There are only a few companies in a position to make those investments: Microsoft, Amazon, Meta, Apple, and perhaps a few others, should they choose to do so. (In former days, governments might have considered investing in such fundamental infrastructure, as they did in roads, power generation, and water infrastructure.)

At the other end of the scale are the tools which anybody could build using the technology provided by the owners of the core technology and infrastructure.

These are entirely different.

Imagine if Thomas Edison and Nikola Tesla between them had managed to be the only ones in a position to generate electricity, selling that energy to anybody who had a use for it, from powering factories to powering the Internet to home appliances.

That is the situation we now have with those few who own access to the technology and anybody else who chooses to build on top of it.

The business models that will enable both to grow and prosper are as yet unclear, but they are becoming clearer every day.

For example, Apple has spent billions developing the technology behind Siri and Vision Pro, neither of which has evolved into a winning position. In early June 2024, Apple and OpenAI did a deal to incorporate ChatGPT into the Apple operating system.

It is a strategic masterstroke.

Apple will build a giant toll booth into the hyper-loyal and generally cashed-up Apple user base. Going one step further, they have branded it ‘Apple Intelligence’. In effect, they have created an ‘AI house-brand.’ Others commit to the investment, and Apple charges for access to their user base, at almost no marginal cost.

Down the track, Apple will conduct an auction amongst the few suppliers of AI technology and infrastructure for that access to their user base. To wrangle an old metaphor, they stopped digging for gold, and started selling shovels.

Masterstroke.

It means they can move their focus from the core GPT technology, to providing elegant tools to users of the Apple ecosystem, and charge for the access.

What will be important in the future is not just the foundation technology, which will be in a few hands, but the task specific tools that are built on top of the technology, leveraging its power.

The 74-year journey of AI

We’re all familiar with the standard XY graph. It shows us a point in 2 dimensions.

AI does a similar thing, except that it has millions, and more recently trillions, of dimensions.

Those dimensions are defined by the words we write into the instructions, built upon the base of raw data to which the machine has access.
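A minimal sketch of that idea, using toy 3-dimensional vectors rather than the thousands of dimensions a real model uses: each word becomes a point, and closeness between points (measured here by cosine similarity) stands in for closeness in meaning. The words and numbers are invented for illustration.

```python
import numpy as np

# Toy 3-dimensional 'embeddings'; real models use thousands of dimensions.
words = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more alike."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(words["king"], words["queen"]))  # high: related meanings
print(cosine_similarity(words["king"], words["apple"]))  # low: unrelated meanings
```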

The output from AI is a function of the data that the particular AI tool has been ‘trained’ on and accesses to respond to the instructions given.

Every letter, word, and sentence generated is a probability estimate, given what has come before, of what the next word, sentence, paragraph, chapter, and so on, will be.

Generative pre-training of digital models goes back to the 1990s. Usually it was just called ‘machine learning’, a label that plays down the ability of machines to identify patterns in data and generate further data points that fit those patterns. The revolution came with the word ‘transformer’, the T in ChatGPT, which came from the seminal AI paper written inside Google in 2017, ‘Attention Is All You Need’.

The simple way to think about a transformer is to imagine a digital version of a neural network similar to the one that drives our brains. We make connections based on the combination of what we see, hear, and read, with our own domain knowledge, history, and attitudes acting as guardrails. A machine simulates that through its access to all the data it has been ‘trained’ on, applying the instructions we give it to assemble from that data the best answer to the question asked.
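For the technically curious, here is a minimal sketch of the scaled dot-product attention mechanism at the heart of the transformer, in plain NumPy: each position in a sequence scores every other position for relevance, and the output is a relevance-weighted blend. The shapes and values are toy assumptions, not anything from a real model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from 'Attention Is All You Need' (2017).

    Each query scores every key for relevance, softmax turns the
    scores into weights, and the output is the weighted blend of values.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how relevant is each token to each other?
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

# Toy 'sentence' of 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
contextualised = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(contextualised.shape)  # (4, 8): each token is now a context-aware blend
```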

The very first paper on AI, written by Alan Turing in 1950, was entitled ‘Computing Machinery and Intelligence’. In it he speculated on the possibility of creating machines that think, introducing the concept of what is now known as the ‘Turing Test.’

The original idea that drove the development of the transformer model by Google was a desire to build a superior search capability. When that was achieved, suddenly the other capabilities became evident.

Google then started thinking about the ramifications of releasing the tool, and hesitated, while Microsoft, which had also been investing heavily through OpenAI (an organisation that started as a non-profit), beat them to release, forcing Google to follow quickly, stumbling along the way.

Since the release of ChatGPT on November 30, 2022, AI has become an avalanche of tools, rapidly expanding to change the way we think about work, education, and the future.

Header cartoon credit: Tom Gauld in New Scientist.