Oct 8, 2025 | Analytics, Leadership, Operations
We have learned over time, led by Toyota, that ‘root cause analysis’, digging down to the true cause of problems, is the road to continuous improvement.
Whenever there is a problem, do not let it get papered over, do not let the symptoms be treated; dig and dig until you understand the root cause, and then fix it.
Often this is a challenging task: root causes are by their nature usually well hidden, and often ambiguous until there is a forensic examination. However, they are always there, and rooting them out enables a compounding of improvements over time.
That analysis requires a cultural context in which to work, as it takes time, consumes resources, and is never completed, as there is always another problem to be analysed. That is the nature of problems: root out one bottleneck, and the blockage just moves to the next spot, previously hidden by the former one.
However, we also tend to examine a process from its beginning, setting out to find the hidden problems occurring inside it.
Should we reverse the order, and look at the causes of success?
Why and how has Toyota managed to remake itself from the shoddy products carrying the lousy quality connotations of ‘made in Japan’ from my childhood into an icon of quality, and in the process driven change through manufacturing globally?
What is the root cause of their success?
My contention is that the root cause is a simple piece of rope.
The Andon cord.
Toyota put Andon cords through their factories, so that any person on the line could stop the line whenever they saw a fault.
Not only were they empowered to stop the line, they were expected to do so any time a problem occurred that could not be fixed in the time allowed at that station in the line. When the line was stopped by a worker, the supervisor immediately went to the stoppage point with two objectives:
- Solve the problem to ensure it would not be repeated, and that the problem got not one step closer to a customer.
- To congratulate the worker for stopping the line so the problem could be fixed. This ensured there was not any reluctance to address a problem by such radical means as stopping a whole factory.
This is an extreme example of empowering the front line: making those who see problems first, because they face them all the time, responsible for fixing them.
When introduced, this must have caused headaches, as productivity would have plummeted. The number of cars produced dropped off a cliff, but those that got through were as good as they could be. Slowly, as problems were solved, productivity rose, quality rose, and over time Toyota became the benchmark for motor vehicle quality around the world.
All from a simple piece of rope, and the surrounding culture that gave those at the coal face both the right to pull it and the responsibility to exercise that right.
What is the equivalent of the Toyota Andon cord in your business?
Sep 1, 2025 | Analytics, Marketing
Goodhart’s law tells us that when a measure becomes a KPI, it ceases to be a good measure. The full text of his observation appeared in a footnote to the paper he presented at a 1975 conference held by the Reserve Bank of Australia: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
That observation is as relevant to every enterprise as it is to government.
Every dashboard produced by the CRM systems I have seen makes the mistake of trashing Dr. Goodhart’s insight.
Customers are subjected to all sorts of profiling as marketers do another iteration of their ‘ideal customer’ and ‘customer journey’ maps using a different set of assumptions.
One set rarely used, at least rarely in my experience, is the ‘inert’ customer.
Most analyses of customers I have seen use some variation of the Pareto distribution. A few customers are deemed heavy users, with decreasing levels of usage down to light and occasional. The only alternative to one of these descriptions is ‘Lost’.
Any examination of ‘lost’ customers will reveal that a significant percentage of them are those that simply went ‘inert’ following a failure of customer service to meet their expectations.
For some, the expectations are unrealistic; for others, it is more like a metaphorical shrug of the shoulders. These customers are not lost, they may be just inert, and possibly able to be reactivated by a demonstration that the customer service they expected is in fact available.
Figuring out who in your lost customers list is really just inert may just require asking. This is always far cheaper, and in my experience, more effective than hunting for new customers.
A former client had an extensive list of what they deemed ‘lost’ opportunities. They sold (and still do) a complex product that did a specific job much better than the alternative standard product. When they interrogated that ‘Lost’ database, they discovered that a sizeable proportion had not bought elsewhere. They had gone ‘inert’. Some were just waiting for a ‘nudge’, which was often just the information and reassurance that the product they initially enquired about was the most appropriate for the job they had put on the back burner.
Header: Charles Goodhart speaking to a large audience.
Aug 22, 2025 | Analytics, Branding, Marketing, Uncategorized
The Pareto principle, the 80/20 rule with variation in the numbers, works in every situation I have ever seen.
Almost.
It is the exception that makes the rule.
Marketers use it extensively to allocate marketing budgets across competing arenas. Define your ideal customer, understand purchase cycles and habits, recognise different behaviours in different channels and circumstances, and allocate accordingly.
It always seemed to both make sense and work well.
Until it did not.
Research done by Andrew Ehrenberg, Gerald Goodhardt, and Chris Chatfield in 1984 produced a statistical model called the ‘Dirichlet model’. It is a statistical reflection of how consumers actually behave across FMCG categories. The model showed that rather than repetitive brand loyalty, most consumers buy from a small repertoire of acceptable options.
The model reveals that many people purchase a brand only now and then, yet collectively they represent a huge share of total sales. This counters the popular Pareto model that assumes 80% of profit comes from the top 20% of buyers.
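The full NBD-Dirichlet model combines a gamma-mixed Poisson (negative binomial) model of purchase rates with Dirichlet-multinomial brand choice. A toy sketch of just the heterogeneous purchase-rate assumption, with all parameters illustrative rather than fitted to any real category, is enough to show why light buyers matter collectively:

```python
import random

# Toy sketch of the heterogeneity assumption behind the NBD-Dirichlet model:
# each consumer has a long-run purchase rate drawn from a gamma distribution.
# A shape parameter below 1 gives the familiar pattern of many light buyers
# and a few heavy ones. All parameters here are illustrative, not real data.
random.seed(1)

def simulate_rates(n_consumers: int, shape: float = 0.8, mean_rate: float = 4.0):
    """Long-run yearly purchase rates for a simulated population."""
    scale = mean_rate / shape
    return [random.gammavariate(shape, scale) for _ in range(n_consumers)]

rates = sorted(simulate_rates(100_000))
total = sum(rates)
lightest_80 = rates[: int(0.8 * len(rates))]  # the lightest 80% of buyers
light_share = sum(lightest_80) / total

# With these parameters the lightest 80% of buyers contribute roughly
# 40-45% of total volume: far more than the 20% a strict 80/20 rule implies.
print(f"Volume share of the lightest 80% of buyers: {light_share:.0%}")
```

The full model goes further, modelling which brand each purchase goes to, but even this fragment reproduces the pattern the store data showed: the occasional buyers, taken together, are too big to ignore.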
Hovering around supermarket shelves in the eighties, observing consumer behaviour and interacting where possible, I could see the truth of this counter-intuitive behaviour. However, the pull of Pareto was powerful, so we often had a foot in both camps.
It is the mid 1980s, and yogurt is the new category growth star. Ski and Yoplait dominate store shelves. Shoppers have their personal preferences: some lean strongly toward Ski, others swear by Yoplait, and many have flavour favourites across both (they prefer Ski strawberry to Yoplait’s, but Yoplait apricot to Ski’s). A few smaller regional players also vie for attention, but if there is a promotion, most consumers happily mix it up.
As the marketing manager whose brand portfolio included Ski during these heady growth days, it was easy to assume the Pareto principle held: 80% of profits come from 20% of devoted buyers. Focus on those heavy consumers, turn the moderate fans into loyalists, and watch the profits roll in, right?
The Dirichlet model exposed the paradox, although at the time I had not heard of it. However, the numbers coming from store sales data, and simple observation of consumer behaviour in stores, confirmed consumers’ disassociation from the theory of Vilfredo Pareto.
So, how does the Dirichlet model suggest fast moving consumer marketers build their brands against competitive brands and the power of retailer ‘pirate’ brands?
Acknowledge Mixed Brand Buying.
Even if you’re proud of your loyal fans, don’t be blinded by them. Ski-lovers might switch to Yoplait for a flavour your brand doesn’t offer, or vice versa. The data shows people happily shop around, even if they have a favourite. Acknowledge that behaviour while creatively giving consumers reasons to buy yours in preference to others.
Look for Wider Reach.
Heavy users are part of the story, but broad availability is often the bigger deal. If your products aren’t visible, buyers won’t remember you at that decisive moment.
Keep Things Distinctive.
You’re not just building brand awareness, you’re building mental availability. That’s how you stay top-of-mind when the shopper sees a new promotion or wants a unique flavour. Whether through catchy ads, recognizable packaging, or fun limited-edition variants, it’s all about creating mental triggers.
Rotate and Refresh.
Both leading yoghurt brands tested new flavours and replaced underperformers regularly. This strategy not only sparked interest among loyal buyers but also tempted the light or occasional buyer who came for the novelty, and might just pick you again next time. It also pleased retailers to have a supplier that explicitly had a ‘one in one out’ brand policy.
Ultimately, the Dirichlet model teaches us that brand loyalty isn’t an all-or-nothing affair. Even with strong preferences, people jump around.
Consider that next time you’re rethinking a marketing campaign. It might feel odd to invest in those who buy you only once in a while, but that large group can deliver a collective boost that keeps you on top.
Header by AI
The Pareto principle holds in every domain I have ever seen, except one.
To build a brand, you must keep existing customers, increase their preference for your brand, and attract new customers.
A Pareto allocation of marketing funds would imply that most of your budget should be aimed at the 20% of customers that produce 80% of your profit.
That allocation would work against you.
Truly loyal customers are less likely to go elsewhere than light or occasional buyers, and such an allocation does nothing to attract new users.
In this case, Pareto was wrong.
Aug 7, 2025 | Analytics, Branding
Investing in building a brand is only done by rational people when they can reasonably expect a return on that investment in the future.
Even with the benefit of hindsight, putting a value on a brand is an exercise in both judgement and maths. Most disregard the maths and invest because the marketing textbook says it is a good idea.
The outcome of having a brand with market power is increased cashflow. Luckily, cashflow is absolutely measurable in the past, and estimable within boundaries of probability for the immediate future. The challenge is setting those boundaries of probability.
The purpose of investing in a brand building program is very simply to build incremental future cash flow. To that end, there are only three considerations.
Mental availability.
A brand provides ‘mental availability’ when a potential prospect is in the market. This is a different metric to the more common ‘awareness’ metric, as it implies that when a potential customer starts the process of considering alternatives for addressing a challenge they face, your brand is front and centre. Awareness lacks that second component. Coca-Cola has huge awareness, but that does not do you much good when you come into the market for a new car.
Differentiation.
A brand articulates in a prospect’s mind why they should use you rather than a competitor to address their problem. Differentiation is only useful when it is wrapped around something competitors cannot, or choose not to, do. To continue the car analogy: for the last few years, new car buyers who wanted to be seen as sensible, ecologically aware, and able to afford an expensive car projecting a specific image felt an electric car was the best option, and Tesla was the only viable choice. That has rapidly changed as competitors finally caught up with the technology, producing objectively better cars at cheaper prices, at a time when the Tesla brand has been tarnished.
Greater value.
The third is the promise to deliver value to the customer greater than they would find elsewhere. Value is not the cost to the customer, although it is a key component of the equation.
Value = Utility – Cost.
As you increase utility, the impact of cost on the calculus of value lessens. Utility itself is a combination of quantitative and qualitative assessments every buyer will make, usually without great consideration. In the case of Tesla, the utility of owning one of their cars has been reduced substantially, so the value of the brand has been significantly reduced.
The key question in attaching a value to a brand is therefore the ‘Utility’ it delivers that is convertible into future cash flow.
In the literature, and in the established practices of those who value brands for a living, there are many ways to do the calculation.
Following are a few of the more obvious.
- Brand attributable cash flow. What percentage of gross margin can be attributed to the brand? Attribution is one of the ‘stickiest’ problems in marketing, so be very careful to separate reality from what you would like to believe. User research is the only way to be as sure as you can be.
- Royalty premium. Brand licencing is a well-worn track. What would you be prepared to pay to have that aspirational brand on your product?
- Price premium, and elasticity. What premium to the market average does your brand attract, and how ‘price elastic’ is that premium? Years ago, when Meadow Lea was king of the margarine market, it held a dominating market share at premium prices. The benchmark shelf price was roughly the same as the alternative brands, usually just a cent or two more, but the promotional discounts demanded by retailers were less; we attracted significant added shelf space, as ‘pantry stocking’ happened when on special; we won the preferred time slots for promotion; we did not need to promote quite so often; and the volume differences between sales at standard shelf price and on-promotion sales were not as dramatic as competitors’. It is not always just the headline price that counts.
- Weighted distribution. What percentage of the distribution points that could stock your brand do so? This is mostly a measure for B2C, and it is too often forgotten.
- Customer repeat purchase. How often does a customer purchase your brand compared to others? Repeat purchase, particularly at non-promotion prices, is the holy grail of FMCG marketing. Price discounting in FMCG has just about destroyed all but the very best brands, so this measure needs to be able to filter out price as the motivation. For example, a house brand at a standard discounted price to the market may seem to have a high repeat purchase rate, but price can be the determining driver for some consumers. Besides, an expectation of a low price is a lousy brand attribute, and does not contribute meaningfully to brand value.
- Net promoter score. This has become a widely used measure, and sadly, widely misused. Every time I pay an insurance premium, I get an emailed NPS survey asking about my experience. Clearly, the insurance company marketing people are deluded about the drivers of the payment of another insurance premium.
- Mental availability. This is the ‘kingmaker’ measure in many markets. I added it here as it is a measure that is calculable with market research.
The word ‘Brand’ can mean many different things to a casual observer. To those who understand the word from a commercial perspective it is simply a device that is an indicator of the probability of future cash flow.
Feb 24, 2025 | Analytics, Strategy
The ‘power law of distribution’, or Zipf distribution, can be used as an adjunct to the much better understood Pareto principle.
There is a consistency to the structure of mature markets. There is a dominating leader, followed by a long tail of smaller competitors. The size rank of an enterprise inversely correlates with its market share.
This is the Zipf distribution at work.
Zipf comes from the study of linguistics, where the frequency with which words occur in a written piece was analysed by American linguist George Zipf in 1935. In summary, the characteristic of a Zipf distribution is that the most common item appears approximately twice as often as the second most common, three times as often as the third most common, and so on.
For example, the most common word appearing in English text is ‘the’, which appears twice as often as the second most common word, ‘of’, and three times as often as the third. This relationship has been validated across languages and levels of sophistication in language use via Project Gutenberg, a free database of 30,000 works. The obvious use is in the statistical probability calculations used to generate the tokens that deliver us output from AI platforms. It also powers the language translation capabilities of digital tools.
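The 1/rank relationship is simple enough to sketch directly. In this minimal illustration the top frequency is an assumed placeholder, not a measured corpus figure:

```python
# Minimal sketch of the Zipf relationship: frequency is proportional to 1/rank.
# The top_frequency default is illustrative, not a measured corpus figure.
def zipf_frequency(rank: int, top_frequency: float = 0.07) -> float:
    """Expected relative frequency of the item at a given rank,
    assuming a pure 1/rank Zipf distribution (exponent = 1)."""
    return top_frequency / rank

top = zipf_frequency(1)
print(round(top / zipf_frequency(2), 6))  # 2.0 -> 'the' vs 'of'
print(round(top / zipf_frequency(3), 6))  # 3.0 -> 'the' vs the third-ranked word
```

Real corpora follow this only approximately (the exponent drifts a little from 1), but the twice-as-often, three-times-as-often pattern is exactly what the formula encodes.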
Zipf distributions occur across many domains beyond language: income distribution, population sizes, the numbers tuning in to TV shows, and the followers of so-called ‘influencers’.
So, how do you use this when thinking strategically about how to break into a market where you are somewhere in the long tail of a Pareto chart?
It is a problem faced by most businesses in competitive markets. The big players get all the attention, leaving little for the small players to fight over.
The answer: Identify an existing niche and own it, or better still, create your own niche, and be the dominating player in a Zipf distribution for that market segment.
Fragmented markets with a wide range of competitive offers tend to consolidate over time into a small number of dominant players. Typically, the number one competitor evolves to hold double the market share of the next.
This occurred when ‘Meadow Lea’ emerged from the crowd of margarine brands in the late seventies. It became the dominant brand with a market share over 20% (at a premium price) with the next brand in line, ‘Flora’, having a share from memory that never climbed over 8%. Then came ‘Miracle’ margarine maxing out at about 5% before going down the gurgler.
‘Apple’ created the smartphone niche, which then became the whole mobile phone market. They led the emerging market in volume until Google released Android, and allowed anyone to use it. Apple no longer holds market volume leadership, currently they are around 15% volume share, but still hold profitability leadership at about 80% of mobile phone profit share, a clear example of a Zipf distribution.
Which would you rather have?
These ‘Zipf dominators’ do not happen by accident.
They are created by a combination of the identification of unmet demand, creation and/or leveraging of a market niche, and an emotional connection compounded by long term brand building.
When you are the second brand, chasing a Zipf dominator, life is tough. It will take strategic insight, investment, time, and perseverance to prevail. Critically, it also requires a deeply strategic analysis of customer behaviour and needs, to be able to see the ‘white space’ that becomes ‘Zipfable’.
Header: George Zipf, courtesy of Wikipedia.
Feb 6, 2025 | Analytics
The Rule of 72 is a ‘rule of thumb’ calculation used to quickly estimate how long it will take for an investment to double in value, given a fixed annual rate of return.
It was first introduced in 1494 by Italian mathematician Luca Pacioli, a collaborator of Leonardo da Vinci. Pacioli is best known for the codification of double-entry bookkeeping: recording transactions via journals and ledgers, and reporting outcomes via the profit and loss statement and balance sheet.
His Rule of 72 is widely used in the initial ‘back of the envelope’ assessment of investment options.
The formula is: Years to double = 72/Annual rate of return.
For example, if an investment has an annual rate of return of 8%, it will take around 9 years to double. (72/8 = 9)
The rule can be used to make reasonable estimates of a range of outcomes, such as how long it will take for money to lose value due to inflation, the impact of compounding interest on debt, and evaluating the impact of service fees.
Be careful, however: at best the calculations are estimates, reasonably accurate at rates between 5% and 10%. Outside this range, accuracy suffers due to the non-linear nature of compound growth.
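A quick sketch makes the accuracy claim concrete, comparing the Rule of 72 estimate against the exact doubling time under annual compounding, ln(2) / ln(1 + r):

```python
import math

# Compare the Rule of 72 shortcut with the exact doubling time
# at several annual rates of return (compounded annually).
def rule_of_72(rate_percent: float) -> float:
    """Approximate years for an investment to double at the given annual rate."""
    return 72.0 / rate_percent

def exact_doubling_time(rate_percent: float) -> float:
    """Exact years to double: solve (1 + r)^t = 2 for t."""
    return math.log(2) / math.log(1 + rate_percent / 100)

for rate in (2, 5, 8, 10, 20):
    print(f"{rate:>3}%: rule of 72 = {rule_of_72(rate):5.1f} years, "
          f"exact = {exact_doubling_time(rate):5.1f} years")
```

At 8% the two agree almost exactly (9.0 vs about 9.01 years); at 2% the rule overstates by roughly a year, and at 20% it understates, which is why the 5% to 10% band is where the shortcut is most trustworthy.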