Keeping Score in Strategy Execution by Chris Fox


Every leader has a strategy. They might not call it a strategy; it might not be clearly articulated and thought through; it might even be a bad strategy. But somewhere they have in mind what they want to achieve, why they want to achieve it, and how they hope to achieve it.


Sadly, not all leaders are able to get their strategies executed. In "The Strategy-Focused Organization", for example, Kaplan and Norton estimate that between 70% and 90% of effectively formulated strategies are never implemented.

One of the primary reasons strategies don't get executed is a lack of engagement from within the organisation. Even where leaders announce their strategies with great fanfare, initial enthusiasm is quickly replaced by apathy, and the organisation reverts to whatever it was doing before.

Contrast this with, for example, sports, where those same people give up their free time, pay money and travel sometimes great distances to watch or take part. Imagine if your staff were even half as passionate about delivering your strategy as they are about their favourite sports!

One of the reasons that sports generate so much enthusiasm is that they have clear and simple scoring systems. By and large, everyone understands exactly who won and who lost, and by how much they won or lost.

Imagine the chaos that would ensue both on and off a rugby field if the players and spectators did not all understand exactly what counted towards the point score, and how that was used to determine who had won. Imagine if, instead, a winner was simply declared several weeks after the game, by a distant body, based on opaque criteria. Participation levels would drop and spectator numbers would plummet.

A clearly understood and agreed scoring system means that players can devote their energy towards training themselves and developing tactics to maximise their score. Their training and tactics will be specific to the rules of scoring in rugby, and will be very different to those of a team trying to win at another sport, such as cycling. Equally, a clear scoring system gives spectators a framework within which they can debate heatedly why their team is or isn't performing to their expectations.  

Compare this to business, where the principal methods of keeping score are usually based on arcane accounting practices, external or regulatory reporting requirements, and/or lowest-common-denominator industry benchmarks. These typically provide opaque results well after the reporting period has closed. Whilst most organisations these days will have some sort of scorecard comprising goals and objectives, KPIs and targets, these are often still not aligned with strategy, and are frequently hard to relate directly to the actions teams undertake on a daily basis.


As a result, most organisations' scorecards fail to generate the levels of engagement required to get strategies delivered. In my experience this is because they suffer from two problems:

1. Lack of strategic alignment leading to unintended consequences

The UK's National Health Service (NHS) has a long and chequered history with setting goals and targets. In one example, the administration set targets for the length of time it took from when a patient was admitted to an accident and emergency (A&E) centre until they were treated by a doctor. Some hospitals quickly worked out that they could more easily achieve this target by deferring admission until a doctor was available. As a result, patients were kept waiting outside in the ambulances in which they arrived: the patients were not being treated any more quickly, but now the ambulances were also not available to attend to new emergencies.

This is often cited as an example of why targets are a bad idea. But what it actually shows is that targets are very effective, which makes it all the more important to ensure your targets are tightly aligned with your strategy.

In this example, assuming improved patient outcomes were the desired goal, a better metric might have been the time from when the emergency services were first called to the time when the patient's treatment was concluded. However, this leads directly to the second problem:


2. Lack of availability of the right data

I suspect that this metric was not selected because the data was not readily available. It would have required co-ordination between the emergency services call centres, first responders, A&E and aftercare services. I suspect that the more restricted measure, which involved only the A&E centres, was chosen because its data was easier to collect.

Invariably, when you introduce a new strategy, the right data is not immediately available. After all, if that was not yet your strategy, what reason would you have had for collecting it? So, when introducing a new strategy, one of your first priorities should always be to implement the systems and processes required to collect that data. Data is almost always more reliable when it is produced directly by the systems and processes being measured, as opposed to measured and recorded after the fact. Your strategic measurements should therefore be a high-priority, non-negotiable requirement of any other changes you make to deliver your strategy.

This will, of course, cost a little more time and money than not doing it, but if the upside is that your strategy actually gets delivered, isn't that an investment well worth making?


Chris Fox is the principal consultant at Chris C Fox Consulting and founder of StratNavApp, a free tool for collaborative strategy development and execution based on the Strategic Learning methodology.