A marketer’s guide to Operational Continuous Improvement measures.

Many owners of small manufacturing businesses, up to about 30 employees in my experience, have only a vague grasp of the measures and mechanics of continuous improvement: having a stable process, then experimenting to do the small things better, every time you do them. The impact compounds. Lean Manufacturing and Six Sigma offer practical tools to boost performance, reduce costs, and improve your ability to serve customers.

Below are 9 key measures for continuous improvement. Pick the few that are most relevant to you and focus on them.

Overall Equipment Effectiveness (OEE)

OEE shows how effectively your equipment runs by combining machine availability, performance, and quality into one simple metric.

Inefficient or underperforming machines will quickly create bottlenecks in your operation. The whole chain can only go as fast as the slowest link, so identifying those bottlenecks and earmarking them for attention will improve overall effectiveness.

With today’s cheap digital sensors and data collection tools, it is becoming easier and cheaper to install machine sensors, downtime logs, and quality checks to monitor uptime, output rates, and defects.
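
As a sketch of the arithmetic, OEE is simply the product of the three ratios. The figures below are illustrative, not from any real machine:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of three ratios,
    each expressed as a fraction between 0 and 1."""
    return availability * performance * quality

# Illustrative figures: 90% uptime, 95% of rated speed, 98% good units.
score = oee(availability=0.90, performance=0.95, quality=0.98)
print(f"OEE: {score:.1%}")  # 83.8%
```

Note how three individually respectable numbers compound into a much lower overall score; that is the point of combining them.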

Cycle Time

Cycle time measures the time it takes to complete a process, from start to finish. Shorter cycles mean more output without extra costs.

The measure can be applied to an individual part of the chain, or the whole chain, using a tool as simple as a stopwatch, or as complex as a SCADA system.

This measure is not to be confused with Takt time, which is a measure of the rate of demand.
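
A minimal sketch of the distinction, using illustrative numbers: takt time is derived from demand, while cycle time is observed on the floor.

```python
# Takt time: the rate at which you must produce to meet demand.
# Cycle time: the rate at which you actually produce (measured).
available_minutes = 450        # one shift less breaks (illustrative)
units_demanded = 300           # customer demand for that shift
takt_time = available_minutes / units_demanded  # 1.5 minutes per unit

measured_cycle_time = 1.8      # stopwatch observation, minutes per unit

if measured_cycle_time > takt_time:
    print("Cycle time exceeds takt time: output will lag demand.")
```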

First Pass Yield (FPY)

‘Get it right first time’ is a cliché that refers to first pass yield. It tells you how many products come out within specifications the first time, helping cut down on rework, scrap, and wasted effort. The principle of the measure is simple, but the trap is in making it too easy. A wide spread of acceptable specifications is more easily met than a narrow one, and will distort the measure, possibly giving you a wrong picture of quality performance.

There are myriad ways to check quality ‘at source’, from random checks to sophisticated visual and digital mechanisms.
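
As a sketch, FPY is the fraction of units that meet specification with no rework, using hypothetical figures:

```python
def first_pass_yield(units_started: int, units_good_first_time: int) -> float:
    """Fraction of units within specification the first time, no rework."""
    return units_good_first_time / units_started

# Hypothetical batch: 1,000 units started, 940 within spec first time.
fpy = first_pass_yield(1000, 940)
print(f"FPY: {fpy:.1%}")  # 94.0%
```

Remember the trap noted above: widen the specification and this number improves without quality improving at all.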

Lead Time

Lead time normally measures how fast you fulfill orders. It can also be usefully applied to parts of the supply process, such as the time taken to respond to queries, provide details, quotes, and many other points of customer interaction. Faster lead times mean happier customers, referrals and repeat business, and better cash flow. In a world that is accelerating at unprecedented rates, being quicker to respond is a powerful competitive advantage.

The easiest way to track lead times is to time-stamp everything, then track the intervals through spreadsheets, your CRM, or even by hand.

Reversing the focus of lead time by measuring your suppliers’ lead times and DIFOT (explained below) is also a powerful way of managing improvement in your operations, and therefore your ability to serve customers.

Inventory Turnover

In simple terms, Inventory Turnover is how many times your inventory is sold and replaced over a specific period. It is calculated using the average inventory value in a period and your Cost of Goods Sold. The simple formula is COGS divided by Average inventory.

Accountants see inventory as an asset; that is how it is treated in the balance sheet. However, as inventory is a measure of how much cash you have tied up, immobile, it is to my mind a liability beyond the delicate balancing point necessary to serve customers. Too much inventory ties up cash and risks obsolescence; too little causes delays. Balance is key.

There are many inventory systems, and all do the same thing: monitor stock levels, keep track of the value, and, when fed sales forecasts, flag the time to repurchase based on usage and nominated procurement lead times.

Inventory turnover is often expressed as ‘days cover’ in fast-moving environments. The formula is the same; the period is days.
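
Both forms of the measure can be sketched in a few lines, using illustrative figures:

```python
def inventory_turnover(cogs: float, average_inventory: float) -> float:
    """Times inventory is sold and replaced in the period: COGS / average inventory."""
    return cogs / average_inventory

def days_cover(average_inventory: float, cogs: float, days_in_period: int = 365) -> float:
    """The same ratio expressed as days of stock on hand."""
    return days_in_period / inventory_turnover(cogs, average_inventory)

# Illustrative: $600,000 annual COGS against $100,000 average inventory.
print(inventory_turnover(600_000, 100_000))    # 6 turns per year
print(round(days_cover(100_000, 600_000), 1))  # about 60.8 days of cover
```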

Scrap, Rework and Waste Rates

Waste eats into profit. You expend time and resources to add to the scrap pile. Anything that reduces waste, scrap and rework will boost efficiency and margins.

Scrap is when you simply send a completed or partially completed item to the bin. Rework is when you invest further time and effort to turn a unit that could be scrapped into a saleable unit. Waste is the material left at the bottom of the ingredient bag, or the leftover material after the templates have been stamped out. Each is different; each warrants attention.

As with the other measures, there are many ways of tracking these three ‘nasties’. Your accountant should be able to give you the numbers based on what was used to produce the inventory, and the difference between that and what the output should have consumed is the place to start looking for the scrap and waste. Rework usually requires added time and labour, which can be tracked.

Customer Complaints and Returns

Often the best source of problem identification is what your customers are telling you. A returned product can be a source of intelligence that enables you to track and pinpoint problems to be resolved before they escalate.

Keep records of customer feedback, returns, and service calls.

Equally, customer satisfaction is a useful measure, though it is challenging to build reliable data. Many enterprises use the Net Promoter Score method; alternatively, monitoring social media feeds may deliver insight. However, when customers pay you their hard-earned money, they expect to be satisfied; just delivering what is expected is hardly reason for a party.

Safety Incident Rate

Ensuring as far as possible the safety of employees is not only a moral responsibility, it is now a legal responsibility that in some jurisdictions has had the onus of proof reversed.

Factories can be dangerous, and removing as many of the sources of danger as is humanly possible is essential. Tracking safety incidents is a measure of how successful that effort has been.

Delivered In Full On Time (DIFOT)

DIFOT is an overarching measure that pulls all the above together. Failure in your operational processes will make delivering in full on time challenging, if not impossible. It is one operational measure that should be on every KPI menu. As noted above, it is a very useful measure of the performance of your suppliers.

‘How to harness the power of real time feedback’

Real-Time Feedback is the objective of any effective performance management system. We instinctively knew how to generate and leverage feedback as kids. Remember that cricket scoresheet a parent kept during a Saturday morning game? It could just as easily have been netball, hockey, soccer, or footie.

Every ball bowled was accounted for in real-time: a run, a wicket, who bowled the ball, and who was the batsman. This real-time recording enabled tactical choices at every ball. This is a ‘box score.’

By contrast, typical accounting systems look at what’s happened up to a point in time, often monthly, in arrears.

Translating real-time game results to a commercial context makes perfect sense. It enables decisions on a short-term basis that maximise outcomes.

Adapting to this change isn’t easy, as our accounting training, established processes, and regulatory systems are geared to historical data, not real-time. They use ‘standards’ and reporting templates that obscure real-time detail.

Successful businesses find ways to translate the outcomes of their actions into visible measures of real-time performance from which they can learn, iterate, and improve.

Following are six tactics you might consider implementing to improve your performance.

      • Break down your processes into their component parts, as far down as you can.
      • Identify the bottlenecks in those processes. These usually become obvious the further you break the processes down.
      • Choose the two or three key metrics that track performance of that part of the process, make them transparent via dashboards, and give the operators the power to adjust and improve.
      • Leverage technology to both do the measuring and provide the real-time feedback. This can be as simple as a digital display of unit movement down a production line, or sales orders received.
      • Start small, and build as the ‘performance bug’ bites those involved. Achieving this sense that there is a ‘performance bug’ around is a function of the leadership and resulting culture that is built.
      • Integrate the dashboards in a process I call ‘Nesting,’ so that each board builds on the ones that contribute to it. For example, a dashboard that reflects the units going past a specific point in a manufacturing process builds to one that reflects the output of that production line, which in turn builds to a factory-wide dashboard.
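
The ‘nesting’ idea in the last tactic can be sketched as dashboards that roll their children’s figures upward. All names and numbers here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Dashboard:
    """A board whose figure is its own count plus those of the boards below it."""
    name: str
    units: int = 0
    children: list = field(default_factory=list)

    def total(self) -> int:
        return self.units + sum(child.total() for child in self.children)

# Hypothetical factory: two line boards nest into one factory-wide board.
line_1 = Dashboard("Line 1", units=420)
line_2 = Dashboard("Line 2", units=310)
factory = Dashboard("Factory", children=[line_1, line_2])
print(factory.total())  # 730 units across the factory
```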

This is all easy to say, but very hard to do. However, if it were easy, everyone would be doing it.

Header credit: Wikipedia. The scoresheet in the header is the scoresheet of Australia’s first innings in the Ashes test against England at the Gabba in 1994. Michael Slater scored 176, Mark Waugh 140, and Glenn McGrath did not disturb the scorers, making another duck. A perfect example of a ‘box score’.

The critical unasked question that can kill a ‘5-why’ analysis.

‘Five Whys’ is a commonly used tool, widely seen as one that, when used well, gives you answers to challenging operational problems.

Mostly it will, but what happens when the answer lies hidden outside the cause-and-effect chains being examined as the source of the problematic outcomes?

To solve any challenging problem, there are four stages:

    • Collection of data
    • Analysis, segmentation, and classification of the data
    • Generation of a theory that might explain the condition, and
    • Experiments to identify the cause of the outcomes, rather than just their observed symptoms.

What happens when the third stage fails to produce a theory that, under experimentation, explains the outcome?

Go back to the basics, by looking at the data more widely, as clearly something is missing. Often it pays to reverse the process and ask yourself ‘what could have caused this outcome’ starting at the problematic result.

Years ago, Dairy Farmers Limited had a monopoly in retail UHT-processed long-life custard. It was a modest-sized niche market that was quite profitable. There had been several attempts by competitors to grab a piece of the action, all of which had failed. Suddenly we started having problems at seemingly random times. When opened, the custard was the consistency of water. The costs of lost production were substantial, but the far greater costs were those of the product recall from retail shelves, and the loss of consumer confidence.

The condition was caused by either the presence of an enzyme called amylase, or a failure of the CIP system. Amylase is a naturally occurring enzyme in starch, which had been eliminated by processing from the complex hydrocolloid (starch) ingredient we used in the custard. We had accepted the assurances of the supplier that the ingredient supplied was amylase free, as per our specifications. We assumed therefore that the problem lay with the processing plant. The plant was torn apart several times, cleaned meticulously, and on one occasion, underwent some expensive engineering changes.

All efforts failed to fix the problem.

A valuable question to ask in this circumstance is: ‘What would have to be true to…..’ In this case, the answer would have been: ‘There is no presence of amylase in the hydrocolloid ingredient.’ This might have sparked, much earlier than it did, the further question: ‘Is a test with a sensitivity of 1 part per million a reliable indication that there is no amylase?’

When we finally asked this question of ourselves, the answer was clearly ‘no’. We set about refining the test our suppliers used to a sensitivity of 1 part per 10 million. This more sensitive test revealed the intermittent presence of amylase in the supplied ingredient.

5-Why is a great tool. However, like any tool, it must be used by an expert in order to deliver an optimum result.

Header is courtesy of a free AI image generator, depicting some tortured engineers doing a root cause analysis.

 

A marketer’s explanation of ‘Capital Intensity’.

A phrase I am hearing a lot in conversation with my networks is: ‘this business model is capital light’. To most aspiring entrepreneurs this seems preferable to ‘capital heavy’, for the obvious reason that less upfront cash is needed at start-up. However, while useful, it is only one way of looking at a business model and its associated strengths and weaknesses.

Capital-intensive businesses have high fixed costs compared to variable costs, making them vulnerable to a slowdown, as they are very volume sensitive. Their break-even point is higher than that of less capital-intensive businesses. However, once they reach that break-even point, most of the rest is profit.

The obvious contrast is between an oil refinery or steel-making plant and an accounting or law practice. The former needs considerable capital deployed before there is any consideration of the labour, management, and raw material required for conversion. The latter requires just offices and capable personnel.

In effect, capital intensity is a measure of how many dollars of capital are required to generate a dollar of sales.

Capital intensity requires that the assets be procured before the business can operate. Funding can be a mix of cash retained from earnings or available from shareholders, loans, or ‘outsourcing’ manufacturing to a contractor who has, or will add, capacity for ‘rent’. An additional source is your suppliers: so long as your debtor days are less than your creditor days, your creditors are in effect adding to the funding of your business.

Often you will see the term ROCE or Return On Capital Employed in financial reports. This is simply the ratio of profit to capital. If you generate $1 in profit for every dollar of capital, you will have a capital efficiency ratio of 1:1.

It is a useful macro measure of the efficiency of the capital used in the business, just as it is a valid calculation of the efficiency of a machine: Revenue/Capital cost of the machine.

Successful businesses use capital to generate revenue and profits, the more successful you are, the better you have used the capital deployed.

How much capital is required to generate your profits?

How to Calculate Capital Intensity

The capital intensity formula is:

Capital Intensity = Fixed Assets / Total Revenue

Example

Imagine a company has $100,000 in fixed assets and $1,000,000 in total revenue. The company’s capital intensity would be: $100,000 / $1,000,000 = 0.1

This means that the company needs 10 cents of capital to generate every dollar of revenue.
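
The worked example above, as a one-line calculation:

```python
def capital_intensity(fixed_assets: float, total_revenue: float) -> float:
    """Dollars of capital required to generate each dollar of revenue."""
    return fixed_assets / total_revenue

# The example from the text: $100,000 fixed assets, $1,000,000 revenue.
print(capital_intensity(100_000, 1_000_000))  # 0.1
```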

Increasingly, the capital required early in the life of a business is reducing as digital technology evolves, removing the capital requirement as a barrier to entry to many industry segments. This is leading to a transfer from capital intensive to ‘technology intensive’, which is in turn becoming increasingly complex and expensive as technology evolves at an accelerating rate, and the business cycles become shorter.

As the old saying goes, there is never a free lunch!

Is 3% of GDP the right answer to our manufacturing complexity problem?

As we seek to move towards 3% of GDP as a target for R&D in the economy, we are assuming that simply increasing the percentage will increase the output, in some sort of linear manner.

Ranking as we do at 93 on Harvard’s economic complexity list, squeezed between Uganda at 92 and Pakistan at 94, we need to do something different.

We have not asked the question: what changes need to be made to the multi-jurisdictional, fragmented, and short-term-focused system we currently have?

In my view we should.

Before we throw more effort and money into the existing system, we should be questioning if the system is able to deliver the outcomes being sought in an optimised manner.

Assuming we elect to keep the existing system (a given, I suspect), we should start by asking challenging strategic questions about the technology domains we need to focus on, the ones that contribute to the shape of the economy we envisage in a decade or two.

That is easy to say; sadly, it is extraordinarily hard to do. It is even harder for the answers that may emerge to get any traction by way of public awareness and funding. Without exception, the questions we must ask will run against the readily available answers that reflect just an extrapolation of the status quo, perhaps with a few wrinkles.

Inevitably, multiplying the complexity of the challenges faced will present problems with no apparent answers, or they would have been answered before. That is why the cycle from science to commercialised product is so long, in most cases, 30 years or more.

Change needs a catalyst, which usually comes from unexpected angles.

Take the development of mRNA vaccines during Covid.

To most, this looked like a rushed and half-baked process: we all know the pharma innovation cycle is at least a decade, from identification of a molecule of value through product development and increasingly demanding levels of clinical trial. Here, it happened in 18 months.

The thing is, mRNA vaccine development did not happen in 18 months.

The logic of what became mRNA was first articulated in 1956, and had been investigated continuously for the following 65 years. Suddenly the catalyst of Covid emerged, and the next decade or longer of development was compressed into 18 months. This was possible simply because most of the work had already been done, under the radar and on a small scale. Scientists knew it was extremely promising; they just lacked the catalyst, and therefore the funds, to prove it.

The question here was: can the expensive and technically very difficult production of mRNA be proved and scaled in 18 months? Clearly the answer was ‘yes’ and now we have mRNA as part of the pharma arsenal.

The PM has committed a billion dollars to developing a manufacturing plant in the Hunter that produces solar panels. On the surface, it is dumb, and it has been condemned by many, including yours truly and the chair of the Productivity Commission, Danielle Wood.

However, what if we asked the mRNA question: Can the production of electricity from solar be re-engineered to use significantly advanced technology over what is currently available? If so, that may enable the plant to be a ‘next technology generation’ solar plant that sets a whole new standard.

The current argument that the investment can never be commercially viable rests on the Chinese stranglehold on the existing technology and cost structure. A new plant using new technology, delivering lower cost structures and better capital productivity, would throw that argument out the window by making the currently dominant technology redundant.

The intensity of intellectual effort required to ask and investigate these alternative questions is extreme.

The odds of one of them identifying an opportunity that is, with the benefit of hindsight, a ‘unicorn’ are tiny, so the political risk is significant. However, if we allow ourselves to be seduced by the fantasy of doing more of what has resulted in our current situation and expecting a better outcome, we will deserve the shellacking the investment will receive.

Two years ago I had a shot, and nominated three headline domains where we should be investing, and my views have not changed. Sitting under these three headlines are a host of opportunities for a focused R&D effort that should be considered by experts in the various fields, choices made, and long-term investment locked in.

Header is from the extensive StrategyAudit slide bank.

A marketer’s explanation of DIFOT, and its difficult sibling.

When you want to improve something, find a metric that drives the performance you want.

Pretty obvious, as most of us subscribe to the cliché that you get what you measure, while remembering Einstein’s observation that not all that matters can be measured.

Ultimately, what the customer thinks is crucial to success. Therefore, measuring the performance in meeting the customers’ expectations is always a good place to start measuring your performance.

Amongst my favoured measures is DIFOT.

Delivered In Full On Time.

That means the full order delivered on the day originally promised, with no errors of any sort, from the quality of the product to the delivery time and the accuracy of the ‘paperwork’.

DIFOT is a challenging measure, as it requires the collaboration and coordination of all the functional and operational tasks required to deliver in full on time.

When you fail to reach 100% DIFOT, as most do most of the time, at least at first, the failures become a source of improvement initiatives.

There is very little more important to the receipt of that next order than your performance on the previous ones. Never forget that, and measure DIFOT.
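
As a sketch, DIFOT can be computed as the fraction of orders that were both in full and on time. The field names here are illustrative, not from any particular system:

```python
from datetime import date

def difot(orders: list) -> float:
    """Fraction of orders delivered both in full and on time."""
    hits = sum(
        1 for o in orders
        if o["delivered"] <= o["promised"]
        and o["delivered_qty"] >= o["ordered_qty"]
    )
    return hits / len(orders)

# Two hypothetical orders: one perfect, one delivered late.
orders = [
    {"promised": date(2024, 6, 1), "delivered": date(2024, 6, 1),
     "ordered_qty": 100, "delivered_qty": 100},
    {"promised": date(2024, 6, 1), "delivered": date(2024, 6, 3),
     "ordered_qty": 100, "delivered_qty": 100},
]
print(f"DIFOT: {difot(orders):.0%}")  # 50%
```

Note the all-or-nothing scoring: a late delivery or a short shipment fails the order entirely, which is what makes the measure so demanding.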

Hand in hand with DIFOT, you should also measure inventory cover.

The sibling.

You can improve DIFOT by simply increasing inventory when selling a physical product. Demand is inherently difficult to forecast, as it is the future, and entirely out of your hands. The challenge is to prevent your warehouses multiplying, and clogging the operational systems. The ideal situation is ‘make to order’, the ultimate shortening of the order to delivery cycle time.

The most common and very useful measure of inventory is ‘days cover’: how many days of normal, average, or forecast sales, whichever you prefer in your circumstances, do you have on hand to meet demand? This measure is extremely useful on a product-by-product basis, but when applied as an average across multiple lines with differing demand levels, it can become a dangerous ‘comforter’.

Counterintuitively, the products that cause the most problems are the smaller-volume ones and new products. In both cases, demand is harder to forecast. The swings from out of stock to excess inventory can be erratic, particularly when a production line is geared to the larger-volume runs of an established product as a driver of operational efficiency.

To achieve 100% DIFOT while controlling physical inventory over an extended period is the most difficult operational challenge I have come across. As a result, it is amongst the most valuable to keep ‘front and centre’. The twin measures of DIFOT and ‘days cover’ are a vital element in addressing that ultimate challenge of customer service.