HR analytics: Who's fooling who? by Max Blumberg

I'm going to argue here that many organisations using HR analytics to improve their people programmes are fooling themselves.

Let me explain: evidence-based HR analytics relies on a model something like this:

HR programme --> Competencies --> Employee performance --> Org performance

That is, you invest in workforce programmes to increase employee competencies ("the how"), which in turn deliver increased employee and organisational performance ("the what").

The role of HR analytics is to determine whether your HR programmes do in fact raise employee competencies and performance. If the analytics show that a programme is not improving performance, they provide guidance on how to fine-tune it so that it does.

Faulty competency and performance management frameworks

Over the past 15 years, I've asked many conference and workshop audiences the following questions about their performance management and competency frameworks:

1. Performance management: To what extent do you believe in your organisation's performance ratings, as measured, say, by your annual performance review? Do they objectively reflect your real behaviour, and are they a fair, unbiased basis for your next promotion and salary increase (as opposed to your manager simply promoting whoever they feel like promoting)?

2. Competency management: To what extent do you believe that your organisation's competency framework accurately captures the competencies required for high employee performance in your organisation?

The vast majority of audiences tell me that they believe in neither their competency nor their performance management frameworks because, for example:

1. Competency frameworks: In most cases, the competency framework was brought in from outside rather than created by managers who understand the competencies actually required for high performance in their particular organisational culture. As a result, managers don't believe in the framework and treat it as a tick-box exercise. Furthermore, because the framework covers multiple job families, it is unlikely to predict performance across those families: is it really likely that salespeople and accountants require the same competencies for high performance? Research shows that these lists should contain very different competencies; yet most organisations use a "one size fits all" framework, possibly with minor adjustments.

2. Performance management: Performance ratings are ultimately the subjective view of an all-too-human line manager. What chance, then, do employees have of an unbiased performance rating? (Vodafone is a notable exception here: the evidence behind performance ratings is verified by multiple people.) Furthermore, many organisations use forced performance distributions, meaning that only so many people can be rated as high performers. Who can blame high performers for not believing in their performance management system, or in their chances of promotion, when the forced distribution says "sorry, but the top bucket is already full"?
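The forced-distribution problem is easy to see with a little arithmetic. The sketch below uses entirely hypothetical names, scores, and quotas: with a 20% cap on the top bucket, a genuinely excellent performer lands in the middle bucket simply because the quota is already full.

```python
# Hypothetical team: four people perform at a near-identical top level,
# but the forced distribution allows only 20% of the team in the top bucket.
scores = {"Ana": 95, "Ben": 94, "Cai": 93, "Dee": 92, "Eli": 80,
          "Fay": 78, "Gus": 75, "Hal": 70, "Ira": 65, "Jo": 60}

# Bucket quotas: 20% "top", 60% "middle", 20% "low".
quotas = [("top", 2), ("middle", 6), ("low", 2)]

# Rank employees by score, then fill each bucket until its quota runs out.
ranked = sorted(scores, key=scores.get, reverse=True)
forced = {}
start = 0
for label, size in quotas:
    for name in ranked[start:start + size]:
        forced[name] = label
    start += size

print(forced["Cai"])  # prints "middle" - a 93 scorer, squeezed out of the full top bucket
```

Cai's 93 is within two points of the best score on the team, yet the rating system reports Cai as an average performer; the distribution, not the performance, decided the outcome.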

GIGO: Garbage In, Garbage Out

So here's the problem: if, like most people, you don't believe in your organisation's competency and performance management frameworks, then you certainly aren't in a position to believe in the results of statistical analysis based on data generated by those frameworks. As the old acronym GIGO says: Garbage In, Garbage Out.

What is the solution? I'll cover this in next week's blog but as an interim taster:

1. Stop doing HR analytics until you've fixed your frameworks. You're wasting valuable resources on analysis of data you don't even believe in (and putting that data into an expensive database does not make it any more valid).

2. Design your own competency frameworks, one for each focal role. When I say "you", I mean that this needs to be done by your line managers (if you want them to believe in it) and facilitated by you as HR, e.g. using repertory grids.

3. If you accept that managers are human and that performance ratings will therefore always be subjective, find ways to minimise the impact of subjectivity, for example by using multiple raters or rankers.

More on this next time.

----------------------------------

Max Blumberg worked as a management consultant at Accenture in the US and South Africa before founding and successfully exiting a technology start-up, followed by a PhD at Goldsmiths, University of London. Today, Max is a Visiting Professor at Leeds University Business School and founder of the Blumberg Partnership.

The Blumberg Partnership operates in the following areas: Sales Force Effectiveness, People Analytics and Artificial Intelligence Services, and Startup Advice.