It’s pretty easy to observe expenditure, velocity and resourcing, and to turn those measurements into impressive charts and graphs. It’s easy to set targets for manufacturing cycle time, mean time between failure for a product and response time for a call centre. It’s pretty straightforward to assess customer opinion, supplier conformance and growth in sales. So what?


Well, if you’re measuring these things and not doing anything about the insights gained, you are just ticking boxes. If you are measuring these things and they do not align with a coherent vision or game plan, then it is difficult to determine the best response. If your goal is just to measure stuff, there are a multitude of tools that can help you do that. But if your goal is to improve holistically, then you need your metrics to mesh, to integrate, to paint a consistent picture for decision-making.

To do this in a useful way, it helps to look at what you are doing as a portfolio of complementary projects. Then you can cascade metrics down through the organisation, through these projects, in a way that ensures outcomes are aligned and converge on the big-picture goals. And before you think this doesn’t apply to business-as-usual (BAU), I suggest that any operation trying to improve in some way inherently contains one or more projects.

So, does it make sense to have projects competing for resources? Does it make sense for projects to deliver capabilities whose benefits cannot be realised because some other initiative is undermining them? Does it make sense to pass risk down your supply chain in a way that ensures contracts become a battleground rather than an alliance? I think not.

A model approach is exemplified by a cascading process known as “Objectives and Key Results” (OKRs). This is reputedly a favourite at Google and, probably, many others. In my view it really is just common sense, and it bewilders me that it is not universally adopted. Here’s how it works:

1. Pick the organisation’s objective(s).
2. Determine the key results that are lead indicators for achieving those objective(s).
3. Flow these key results down to the next layer as that layer’s objectives.
4. Formulate appropriate key results at this layer.
5. Repeat at successively lower layers.
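
To make the cascade concrete, here is a minimal sketch in Python. The class, the method and the example objectives are all hypothetical, invented purely for illustration; the point is simply that each layer’s key results become the next layer’s objectives:

```python
from dataclasses import dataclass, field

@dataclass
class OKR:
    """One layer's objective, plus the key results that indicate progress."""
    objective: str
    key_results: list[str] = field(default_factory=list)
    children: list["OKR"] = field(default_factory=list)

    def cascade(self) -> None:
        """Flow each key result down as the objective of the next layer."""
        for kr in self.key_results:
            self.children.append(OKR(objective=kr))

# Hypothetical top-level objective with lead-indicator key results.
company = OKR(
    objective="Grow recurring revenue",
    key_results=[
        "Increase qualified leads by 20%",
        "Cut customer onboarding time to under 5 days",
    ],
)
company.cascade()  # each key result becomes a team-level objective
for team in company.children:
    print("Team objective:", team.objective)
```

Each team would then formulate its own key results against its inherited objective, and the same step repeats at the layer below.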

A key point to note is that the objectives and associated key results must be within the sphere of control of those being held accountable; they must be able to influence the outcome. Also, to be effective, it is important that the number of simultaneous objectives is three or fewer (ideally just one!). Multi-tasking, with its continual context switching, really isn’t effective; it just isn’t.

To ensure constructive, collaborative behaviours, it is essential that the linkage between layers is clear, so that local decisions and results can be easily traced to the overall scoreboard.

And finally, the chosen metrics (key results) should be lead indicators, so that there is time to respond if the signs are not good. A commonly used example of lead vs lag metrics comes from weight loss: getting on the bathroom scales gives you a lag metric, whilst measuring calories consumed and hours spent exercising gives you a couple of lead indicators.
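
Sticking with that analogy, here is a toy sketch of the difference; the numbers and thresholds are invented for illustration only:

```python
# Lag metric: only visible after the fact, when it is too late to react.
weekly_weigh_ins_kg = [82.0, 81.8, 81.9]  # hypothetical data

# Lead metrics: visible daily, while there is still time to respond.
calories_per_day = [2100, 2450, 2300, 2600, 2500, 2400, 2350]
exercise_hours_this_week = 2.5

# Hypothetical targets for an early-warning check.
CALORIE_TARGET = 2200
EXERCISE_TARGET_HOURS = 4.0

avg_calories = sum(calories_per_day) / len(calories_per_day)
if avg_calories > CALORIE_TARGET or exercise_hours_this_week < EXERCISE_TARGET_HOURS:
    print("Lead indicators are off track: adjust now, before the next weigh-in.")
```

The lag metric tells you what already happened; the lead metrics give you a chance to change what happens next.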

Designing effective metrics is not simple but, hopefully, the path should now be clear. As with many things, the best approach is to do it, observe, learn and adapt.
