The essential lesson for leaders in fiction.

There is rich wisdom to be found in fiction, although you might have to look hard for it. Some writers have used fiction to deliver timeless messages.

For example, Sir Arthur Conan Doyle had his protagonist Sherlock Holmes utter some genuinely meaningful lines. Amongst these is the classic: ‘It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.’

This is confirmation bias at work. We see things that confirm what we already believe much more often and clearly than we see things that may erode or contravene our existing beliefs.

In digging for facts and data, you need to be able to ask smart questions, in some sort of order, to give some ‘shape’ to the way a problem is perceived.

  • How and why is this issue a problem? Assemble observations, informal information, and input from customers, line workers, and wherever else the problem may be seen, to confirm there really is a problem, not just someone having a moan.
  • When does the problem show itself? Under what circumstances does the problem appear? Are there patterns of behaviour or circumstances that seem to be correlated? Is there any foundation for seeing causation?
  • Where is the problem showing up? This goes a step deeper, to start defining the location of the problem and the impact it may have.
  • What are the impacts of the problem? What are the financial, cultural, value chain, and customer impacts?
  • What is the priority in allocating resources to solve the problem? There are always more problems than there are resources to address them, and as a result, only a few get the attention they deserve. Make sure those limited resources are allocated in the best possible way.
  • What return is delivered by solving it? This is far more than a financial calculation; it needs to include an assessment of how the transaction costs may be moved around. What is the impact on workflow and stakeholder engagement as they see problems being identified and removed?
  • What other problems are uncovered by the consideration of the first one? Looking at a problem always uncovers others. Often, in the process of understanding the problem, others that are the root causes show themselves for the first time. The ‘5 why’ tool is invaluable in understanding the root causes of problems, and should be in every manager’s toolbox.
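The ‘5 why’ drill mentioned above can be sketched in a few lines of code. This is a minimal illustration only; the function name, the example problem, and the answer chain are all hypothetical, not drawn from any real case:

```python
def five_whys(problem: str, answers: list) -> list:
    """Walk a chain of 'why?' answers back from a surface problem.

    Each answer becomes the subject of the next 'why?', so the last
    entry in the chain is the candidate root cause.
    """
    chain = []
    subject = problem
    for answer in answers:
        chain.append((f"Why: {subject}", answer))
        subject = answer
    return chain

# Hypothetical example: a late-delivery problem traced back five levels.
chain = five_whys(
    "Deliveries are late",
    [
        "Orders leave the warehouse after the courier cutoff",
        "Picking starts too late in the day",
        "Pick lists are only printed after the morning meeting",
        "The meeting waits on overnight sales data",
        "The sales data batch job runs on an outdated schedule",
    ],
)
for why, answer in chain:
    print(f"{why} -> {answer}")
```

The point of the structure is that the last answer, not the first, is the one worth fixing.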

Going back to Sherlock, another extremely useful observation captured the essence of Occam’s razor: ‘When you have eliminated the impossible, whatever remains, however improbable, must be the truth.’

It is our job as leaders to get at the truth, and to communicate that truth widely, in a manner that is clearly understood and able to be acted on. So, the essential lesson is to ask good questions.

 


The five simple questions for an effective After Action Review

The term ‘After Action Review’ emerged from the US military, which formalised it after facing a range of disasters in the field, from Vietnam to the Middle East. Finally, it became obvious they were repeating the same mistakes, consistently.

They should have asked an accountant earlier.

Standard good management practice after a capital expenditure project has been to review the actual outcomes of the expenditure against those planned. Variations from the plan needed to be understood, to ensure errors in judgement were not repeated.

In my experience, it rarely happens well enough; too much corporate politics and ego are involved. However, the idea is not a new one; it just makes absolute sense, which is why you should build it into the performance management culture of your business.

Five simple questions: the first is easy, that is the plan; the following three are where the gold of improved performance hides, when you dig hard enough and ensure the lessons are well learned; the last drives future action.

  • What did you plan to make happen?
  • What actually happened?
  • What caused the difference?
  • What can we learn?
  • What specific changes will we make next time?
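The five questions above map naturally onto a simple record that can be filed after every project. A minimal sketch in Python; the class name, field names, and example figures are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AfterActionReview:
    planned: str                 # What did you plan to make happen?
    actual: str                  # What actually happened?
    cause_of_difference: str     # What caused the difference?
    lessons: list = field(default_factory=list)       # What can we learn?
    next_changes: list = field(default_factory=list)  # Specific changes next time

# Hypothetical example for a small product launch.
aar = AfterActionReview(
    planned="Launch in 6 weeks at $50k cost",
    actual="Launched in 9 weeks at $65k",
    cause_of_difference="Packaging artwork approvals took 3 weeks longer than planned",
    lessons=["Artwork approval is on the critical path"],
    next_changes=["Start artwork approval before final recipe sign-off"],
)
print(aar.cause_of_difference)
```

Keeping the record this small is the point: a form that takes five minutes to fill in is one that actually gets filled in after every project.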

Such a process, embedded in your performance management culture, will deliver results. ‘Rinse and repeat’ the question process after every project. No matter how small the project may appear to be, an AAR should be automatic, simply a standard part of the process. After a while, it will become second nature to observe the things that may cause the unexpected, plan for them, and take steps to remove them before they occur.

Therein hides one of the secrets of continuous improvement in profitability.


11 research traps novice marketers stumble into, regularly.

The implication of the word ‘research’ is that you are setting out to understand something. All too often over the years, I have observed situations where that is not the case.

Market research can be a money trap, consuming resources with little or no payback. It can also be a huge capability to be leveraged for great benefit when done well.

The challenge is that it draws on a set of interrelated disciplines, from statistics to psychology, behavioural economics, and science, and therefore requires a wide breadth of skill and acquired wisdom to be useful.

Doing commercially productive market research is a bit like learning to swim. No matter how much you read about it, study wise texts, and observe others, until you get into the pool and immerse yourself, you will never really understand it.

Some things are relatively easy to research. Usually they are adverse outcomes that have happened, and have been quantified. The research is aimed at understanding the drivers of those adverse outcomes. More challenging is research that seeks to put a shape around the future. If this happens, what then?

Most material published on the topic is about the techniques, the templates to use. These are very useful, but fail to accommodate the realities that intrude in real commercial situations and impact the research outcome.

Following are some of the hard-won lessons from doing marketing and market research over the last 40 years. The tools have changed dramatically in the last decade; the principles remain unchanged.

Not understanding the ‘scientific method’.

Most are familiar with ‘the scientific method’: identify a problem, form a hypothesis, test that hypothesis, adjust the hypothesis based on the results, rinse and repeat. However, most do not recognise that the foundation of the scientific method is to set out to disprove an idea. This objective to disprove a proposition ensures that all relevant information is made available: all contrary data, opinions, and untested ideas are brought to the table for examination. It often happens that information that may be relevant is not considered; ego, confirmation bias, existing standard procedures, and plain lack of critical thinking cloud the process. Over the years I have seen piles of research that sets out to prove a theory, and does so by, usually unconsciously, excluding data that might not confirm the proposition.
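The point about setting out to disprove, rather than prove, can be shown in a small sketch. Everything here, the function, the proposition, and the order data, is hypothetical, but the logic is the falsification loop described above: one counter-example is enough to reject the proposition.

```python
def seek_disconfirmation(hypothesis, observations) -> bool:
    """Return True only if no observation contradicts the hypothesis."""
    for obs in observations:
        if not hypothesis(obs):
            return False  # a single counter-example rejects the proposition
    return True

# Hypothetical proposition: 'all our late orders come from the Sydney warehouse'.
orders = [
    {"warehouse": "Sydney", "late": True},
    {"warehouse": "Sydney", "late": True},
    {"warehouse": "Melbourne", "late": True},  # the contrary data point, easily excluded
]
late_orders = [o for o in orders if o["late"]]
survives = seek_disconfirmation(lambda o: o["warehouse"] == "Sydney", late_orders)
print(survives)  # False: the Melbourne order falsifies the proposition
```

Quietly dropping the Melbourne record from the sample is exactly the unconscious exclusion described above, and it would flip the result from rejection to false confirmation.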

Failure to identify the problem/opportunity.

Useful research depends on providing answers to problems, or offering insight into the scale and location of opportunities. In the absence of clarity about the objective of the research, you cannot reasonably expect any value to be delivered.

Asking poor questions.

Not all questions are created equal. Asking good questions implies that enough work has been done to identify what is, and what is not, a good question. Also important is the manner in which the questions are asked; it is easy to generate different responses through seemingly subtle variations in how questions are put. E.g. ‘How big do you like the fruit pieces to be in your brand of yoghurt?’ This implies that the fruit in yoghurt comes in pieces, and ignores the possibility that the fruit may be added as a puree. Those who prefer a homogeneous product using puree are thus precluded from giving an accurate response. Such a question would be relevant to the marketer of fruited yoghurt seeking a point of differentiation, which would influence the product ingredients and choice of processing equipment.

Less than rigorous & neutral data collection & analysis.

We all know numbers can lie; we see it every day. Numbers can be used to support any proposition you choose, when managed to that end. The absence of rigour in research methodology and analysis will lead to flawed conclusions every time.

Not knowing what you will do with the outcomes

In the absence of a clear use for the research, why do it? The answer is usually found amongst ego, seeking validation of a currently expressed position, or a crutch to avoid making a decision. How often have we heard the phrase: ‘More research is required’?

Selective use of results.

Selective use of research outcomes is standard practice in many places. Parts of the research that support a proposition are used in the presentation of a position, and any parts that do not are ignored. You see this all the time in political discourse in this country: politicians of differing parties taking the same research reports and claiming opposite conclusions is common. Exactly the same process exists in corporate bureaucracies.

Lack of understanding of the techniques

You do not have to be a statistician to understand the outcomes of data analysis. However, you do need to understand what the terms mean, and the implications they carry. This applies from sampling techniques to the tools of statistical analysis and results presentation. You must understand the principles sufficiently well to be able to ask informed questions, and to recognise gobbledygook when it comes back to you.

Not considering Anthropology & Context

Anthropology might seem a little misplaced in market research, as it is the study of behaviour in varying cultural settings. However, consider how different the answer to a question about your work might be if asked while sitting at your desk absorbed by a task, compared to being asked the same question while on holiday. Same question, different context.

These days we are often allocated to teams at work that are set up to solve problems, generate ideas, or just manage work flow. How different are our reactions inside those groups, to those to which we choose to belong outside the work context, and how differently do we behave?

Conducting research in the absence of such considerations can generate misleading outcomes. E.g. conducting research on a new piece of packaging around a group discussion table will evoke responses, and a conclusion. How different might the reactions of those same people be when confronted by the new pack while shopping in a supermarket?

Failure to understand the drivers of Behaviour

Psychology plays a huge role in the development and reporting of research. Our brains are hard-wired to reduce cognitive load, so we can be easily tempted to accept a conclusion not supported by the research. It is relatively easy to persuade others of the veracity of a conclusion simply by the manner in which it is presented. E.g. which milk is better for you: one that contains 3% fat, or one that is 97% fat free? They are identical products, but in research, a significant majority will choose the second: 97% fat free.

Similarly, in a qualitative group discussion, a proposition seemingly supported by most around the table can gather overwhelming support, irrespective of its accuracy. This outcome has been repeated endlessly in first-year psychology classes, based on Solomon Asch’s 1951 experiments examining the power of a group to influence the expressed opinion of an individual.

What people say they do and what they actually do can be very different.

When you ask questions, they are answered from within the existing frame of reference of those being questioned. Their ‘mental models’ dominate how they see things. Henry Ford made the point when he reputedly quipped that he would not consult customers on what they wanted, because he already knew: a faster horse. Steve Jobs expressed the same opinion in different words on several occasions, and was also proven correct.

Too much research is aimed at connecting the future dots to give a sense of certainty about the future, just to make people feel more comfortable. If we could tell the future accurately, we would all be at the local casino for a few nights until we got banned for winning too much.

Respecting the status quo too much

We humans are keen to retain the status quo, simply because it has been proven to work, and change involves risk. We are hard-wired to avoid risk, a function of evolutionary psychology from times when taking risks often meant you became breakfast for something nasty. The promise of a reward must be many times stronger than the downside of a behaviour before most of us are prepared to entertain the risk.

Presenting a research finding that is inconsistent with the well-known view of the Managing Director is a risky undertaking that is often avoided. This is commonly called the HiPPO (Highest Paid Person’s Opinion) problem, and it is pervasive. It is particularly challenging when the person concerned (often a bloke) is repeating the opinion of someone else. In consumer products, this is often his partner.

Poor presentation of results & Conclusions.

The errors I have seen in presentations are myriad. However, the worst are:

  • Lack of clarity and simplicity in the conclusions, which limits useability.
  • They do not answer the question. Generally this is because the question was ambiguous, unnecessary, or stated a proposition someone wanted verified.
  • Death by PowerPoint.

 

Every research project can be placed somewhere on the matrix in the header. The further right and higher you go, the greater the degree of uncertainty involved. In the bottom left quadrant, you are seeking answers that are quantifiable: things that have happened that you are seeking to understand. The top right quadrant is the future, and contains things we do not know much, if anything, about. Often we do not even see them. Research that puts numbers against hypotheses falling into this quadrant should not be believed. At best they are an estimate of a probability; at worst, just a WAG (Wild Arsed Guess). What is important in these circumstances is that you understand the risks. Remember that old cliché: ‘plan for the worst, hope for the best’.


A marketer’s explanation of ‘normalising’ your P&L.

You will not hear the term ‘normalising’ the P&L very often. When you do, it is often an indication that the business is in a frame of mind open to change.

It is a common starting point of valuing a business, a process that has two basic buckets:

  • Financial value. This is where any valuation process will start, with the numbers.
  • Strategic value. Far more qualitative than the numbers, a potential buyer will set out to put a value on such things as market share, customer profiles, geographic location, cultural fit, and so on.

Valuing a business is a complex exercise, particularly valuing the contents of the ‘strategic bucket’.

Creating a financial value is much better understood, and almost always starts at the same place: EBITDA. Earnings Before Interest, Tax, Depreciation, and Amortisation.

EBITDA is a construction from the Profit and Loss account, which reflects the trading results. Usually the P&L is completed on a monthly basis, and so long as the classification of expenses remains consistent, it can be used for comparisons over time to give a good picture of trends.

However, the P&L can also be the repository of all sorts of costs and activities that bear little relationship to the competitive trading health that determines the value of a business. Therefore, an exercise to arrive at a value will seek to remove, or add back, items so that the P&L more accurately reflects that trading health. The usual term is to ‘normalise’ the P&L.

This is particularly relevant in the sale process of a private company, which is less subject to the rigours of governance that apply to listed companies with professional rather than family management.

The common items to be ‘normalised’ I have seen are:

Related party revenue or expenses. Purchases from, or sales to, another business related in some way to the one being investigated, that are above or below market value. A common practice is for the owner of a private business to have their superannuation fund own the premises from which the business operates. The premises are then leased back to the operating business at a rate not reflective of competitive market value.

Owner bonuses and benefits. Often the owner of a private business will pay themselves and family members more than the market value of their contribution to the business. It also works in reverse: owners are sometimes the worst paid staff members, working longer hours than anyone else, just to keep the wheels turning over. These anomalies need to be ‘normalised’.

Support of redundant assets. Every business has redundant assets that would be jettisoned by a new owner. This stretches from old inventory still carried on the books, to premises not utilised, to the country retreat used occasionally for a sales conference, but usually for the summer holidays of the owner. These do not realistically impact the performance of the business, and a new owner, unencumbered by the past, and by costs not associated with the trading position of the business, will remove them from the P&L.

Asset and expense recognition. Treating an expense as an asset, ‘capitalising’ it, is a common practice that will boost short-term profitability by moving items from the P&L to the balance sheet. While this practice is subject to the scrutiny of tax and accounting rules and independent audit, it is pretty common, particularly in the treatment of repairs and maintenance. As with many items, the accounting treatment can be used both ways to ‘manage’ short-term profitability.

One-time costs. Items such as litigation, insurance claim recoveries, one-off professional fees, even charitable donations, that are not a normal part of trading operations need to be identified and ‘normalised’ to build the picture of repeatable trading outcomes.

Inventories. Every business has inventories; for many it is a significant item. Manufacturing businesses have physical inventories in raw materials, work in progress, and finished goods, while service providers have projects in various stages of completion. The method of valuing inventories is subject to all sorts of shenanigans, and the amount of inventory to mismanagement, sloppy processes, and a host of other curses. Aggressive and consistent inventory valuation is a vital part of understanding the working capital needs of a business, and it is often the most contested piece in the valuation puzzle.

When you have all that out of the way, you should be able to calculate a reliable figure for the free cash flow generated, or consumed, by the business: a further vital number, and the one upon which many acquisition/divestment decisions have been taken.
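The add-backs described above reduce to simple arithmetic once each item is identified and given the right sign. A worked sketch, with every figure and adjustment entirely hypothetical:

```python
# Hypothetical normalisation of a private company P&L.
reported_ebitda = 800_000  # as per the management accounts

# Positive adjustments add back costs a new owner would not carry;
# negative adjustments recognise costs the current P&L understates.
adjustments = {
    "market rent not charged by owner's super fund": -60_000,  # rent was below market
    "owner salary above market rate": +90_000,                 # excess pay added back
    "country retreat running costs": +35_000,                  # redundant asset
    "one-off litigation settlement": +50_000,                  # non-recurring item
    "repairs capitalised instead of expensed": -25_000,        # expenses understated
}

normalised_ebitda = reported_ebitda + sum(adjustments.values())
print(f"Normalised EBITDA: ${normalised_ebitda:,}")
```

The signs are the part worth arguing over: each item has to be classified as something a new owner would, or would not, continue to carry, which is where most of the contested judgement in a valuation sits.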

As a consultant looking to help businesses improve their financial and strategic performance, I often quietly do a ‘normalisation’ exercise on a client’s P&L. This process almost always throws up those difficult questions that need to be asked and answered before an improvement process can be truly effective.

Header cartoon credit: Dilbert and Scott Adams again capture the idea.

6 questions to assess: ‘How strategic is your data’?

Data is inherently tactical: just numbers without intelligence. It takes structure, capability development, and governance to turn it into a useable asset that adds value. In the absence of a structure designed to enable the identification, analysis, and leveraging of that data, turning it into useable intelligence, it will remain just data.

To go about that task, ask yourself a number of questions:

What are the data flows?

Through the enterprise, who uses the data, how do they use it, and to what outcome?

Where do the interconnections occur, and to what extent are they compounding positively? Data can also compound negatively, usually because it reinforces an existing confirmation bias that is flawed.

Data is functionally agnostic; it should be readily available to all, and the outcomes of its use should be transparent so they can be built upon and compounded.

Who ‘owns’ the data?

Too many times I see the IT department generating data and keeping it to themselves. Similarly, the finance department is guilty, as are all functions. This is usually not malicious; it simply reflects a lack of cross-functional collaboration. It is becoming more common for marketing to drive a large part of the data agenda, enabled by digital tools, but few marketers have the capability to do it effectively.

Often, there is an expectation that ‘digitisation’ of the enterprise will change the way data is used. Not so. It is no more than putting a new coat of paint on the building; unless the internal structures are changed as well, nothing really changes, you just get a few press releases and nice photos for the annual report.

What data is used?

Piles of data are generated, often collated and distributed, or made available, but never put to productive use. Usually the missing ingredient is curiosity. Those who are curious approach the data with a ‘why’ and ‘what if’ attitude; they ask questions that identify holes in the data, drive those holes to be filled, and seek new sources.

Where does the data add competitive value?

Competitive value is a two-sided coin. On one side is the need to keep up with what your competition is doing, to leverage the opportunities for productivity and not fall behind in your customers’ eyes. The other is to find ways for data, and more specifically the knowledge that comes from analysing data, to give you a competitive edge. If a proposed investment does not do at least one of these two things, why would you proceed?

How well do the data outcomes reflect alignment with strategy?

Having data, and the analyses that go with it, leading to conclusions inconsistent with or divergent from the stated strategy must cause you to question the data, its analysis, and the strategy. In these circumstances, it makes sense to deploy the scientific method: create a hypothesis, test it, collect more data, and rinse and repeat until you have alignment between the strategy and its supporting data.

Where are you on the digital adoption curve?

Data is just another asset; it requires explicit actions to build the capabilities necessary to generate, use, and fund it. Explicit policies and priorities must be given to the investments in data development and the capabilities required, or it will not happen. There needs to be a clear picture of the structure of data domains, from engineering and finance to marketing and sales, and they need to be prioritised and organised to deliver the best return in the long term.

The tools being used to accumulate, process, and analyse data are just tools, no different to the hammer that drives a nail. It is how we use them that makes the difference. Tools everyone should have are those that ensure the data is both clean and robust. Decisions based on data that fails either of these ‘sanitary’ tests will be sub-optimal at best.

We have entered the digital world. Data and its organisation, funding, leveraging and governance are rapidly becoming the key to competitive survival.

How well are you, and your enterprise placed?

Header cartoon: courtesy Tom Gauld at tomgauld.com.