
Is it Time to Start Trusting the Machine Again? by Barbara Hyman

How many research papers do we need to read, or edicts from top-class CEOs do we need to hear, before we get the message that in every organisation it all comes down to the people?

Adam Bryant, who pens the terrific weekly column ‘The Corner Office’ for the NYT, has interviewed a diverse pool of leaders, and a common theme from 99% of his interviews with CEOs is that success correlates with hiring the best team.

My old boss Tracey Fellows, CEO of the REA Group, was also fond of saying that it’s ‘people’ that keep her up at night more than any other business challenge.

Most hiring in most organisations relies entirely on people to make these most important decisions. Yet we do so with little objective data. Instead, we have layers upon layers of bias. And just to give you an idea of how many there are, here is a whopping full list of cognitive biases for you to check out. This somewhat exhausting article lays out in excruciating detail a plethora of cloudy, smeary and hazy biases I didn’t know could exist. It arrives at the conclusion that they are mostly fixed and unalterable, regardless of how many unconscious bias training sessions you attend in your lifetime.

There is no scalable, efficient and reliable way to train us out of our biases. They are so embedded, and mostly so invisible, that we just can’t check ourselves in the moment to manage them.

So, how is that diversity hiring program going?

In some functions or departments, your ‘hiring for diversity’ may be going very well. However, diversity training and hiring aren’t repeatable where humans are involved.

And if humans could be trained out of their biases, we might get more diversity in our new hires. But then, do we know that we are getting the ‘better’ hire from the applicant pool? How CAN you know, if you have no method of reliably testing for what really matters for success? We rely on CVs to give us that ‘insight’. CVs that are crafted, designed, worded and reworded to show the applicant in the best light. Ever appointed an Excel whizz who, on hire, doesn’t know a pivot from a concatenate? Or, even worse, who cannot apply logic, reasoning and critical thought?

We have all done this: applied crude (biased) filters to screen applications …

  • Blue chip companies on their CV. Tick!
  • Stayed in their role for 2 years on average. Tick!
  • Promoted at least once inside of a (good) organisation. Tick!
  • Good school. Tick!
  • Impressive referees. Tick!

Because biases appear to be so hardwired and inalterable, it is more straightforward to remove bias from algorithms than from people. This gives AI the potential to create a future where the important insights underpinning decisions such as hiring are made in a fairer way. The machine can be trained to help you make repeatable and stable decisions.

Algorithmic bias is not the elephant in the room.

Some argue that algorithms themselves have bias. The reality is that machine learning, by its very definition, aims to find patterns, mostly latent, in large volumes of data to support decisions. Whether bias creeps in is driven by what training data you use to feed the machine.

And you can ensure there is no (or at least limited) bias in the machine learning. It comes down to two things:

1. What data is being used to build the model?
2. What are you doing to that data to build the model?

If you build models off the profile of your own talent, and that talent is homogeneous and monochromatic, then so will the model be, and you are back to self-reinforcing hiring. If you are using data which looks at age, gender, ethnicity and all those visible markers of bias, then sure enough, you will amplify that bias in your machine learning. Relying on internal performance data to make people decisions is like layering bias upon bias. It’s similar to building a sentencing algorithm with sentencing data from the US court system, which is already biased against Black men.
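To make the first of the two questions concrete, here is a minimal Python sketch, assuming pandas and an entirely hypothetical applicant file with illustrative column names (`age`, `gender`, `ethnicity`, `hired`). It shows the kind of check implied above: strip the visible demographic markers from the training data, then flag any remaining features that correlate strongly with a protected attribute, because those can act as proxies and quietly reintroduce the same bias.

```python
import pandas as pd

# Hypothetical applicant training data; the file and column names are illustrative only.
df = pd.read_csv("applicants.csv")

PROTECTED = ["age", "gender", "ethnicity"]

# 1. Remove visible demographic markers (and the label) from the model's features.
features = df.drop(columns=PROTECTED + ["hired"])

# 2. Flag remaining numeric features that correlate strongly with a protected
#    attribute (age is used here because it is numeric) -- likely proxy variables.
numeric = features.select_dtypes("number")
proxy_risk = numeric.corrwith(df["age"]).abs().sort_values(ascending=False)
print("Potential proxy features for age:\n", proxy_risk.head())
```

Dropping the protected columns is the easy part; it is the proxy check, and what you do about the features it flags, that determines whether the model merely hides the bias or actually removes it.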

So instead of lumping all AI and ML into one big bucket of ‘bias’, look beneath the surface to really understand what’s going into the machine, as that’s where the amplification risks loom large.

To ensure you are using machine learning wisely, only use objective data that contains no biodata (that means a big NO to CV and social media scraping). Test rigorously and adjust so the system continuously learns. And use multiple machine learning models to triangulate the result, rather than relying on one version of the truth.
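As one deliberately simplified example of what ‘test rigorously’ can mean in practice, the sketch below computes the selection rate of a model’s recommendations for each demographic group and compares the lowest against the highest using the common four-fifths (80%) adverse-impact rule of thumb. The data frame and its column names (`group`, `recommended`) are hypothetical stand-ins for real outcome data.

```python
import pandas as pd

# Hypothetical outcome data: one row per applicant, with the demographic group
# (held out of the model, kept only for auditing) and the model's recommendation.
outcomes = pd.DataFrame({
    "group":       ["A", "A", "A", "B", "B", "B", "B", "B"],
    "recommended": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Selection rate per group: the share of applicants the model recommends.
rates = outcomes.groupby("group")["recommended"].mean()

# Adverse-impact ratio: lowest selection rate divided by the highest.
# The "four-fifths" rule of thumb flags anything below 0.8 for investigation.
impact_ratio = rates.min() / rates.max()
print(rates)
if impact_ratio < 0.8:
    print(f"Adverse impact ratio {impact_ratio:.2f} -> investigate and retrain")
else:
    print(f"Adverse impact ratio {impact_ratio:.2f} -> within threshold")
```

Run continuously over fresh outcome data, a check like this is what makes the detect-and-correct loop described below possible in a way it never is for a human interviewer.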

Machines are better at learning this stuff.

Unlike trying to solve human bias, machine learning is repeatable, stable, consistent and, most importantly, testable. The value to the organisation is, of course, immense.

  • Every applicant gets a fair go at the role
  • Every applicant is assessed
  • Hire the person who will succeed vs someone your gut tells you will succeed
  • Use fewer resources to hire
  • Reduce the cost of hire

Now that’s ticking all the right boxes. It turns the possibility of objective and valid decisions at scale into a probability.

Machine learning outcomes are testable, and corrective measures remain consistent, unlike with humans. The ability to continuously test both training data and outcome data allows you to detect and correct even the slightest bias if it ever occurs.

Soon (maybe already) you will be putting your life and your loved ones’ lives in the hands of algorithms when you ride in that self-driving car. Algorithms are extensions of our cognitive ability, helping us make better decisions, faster and more consistently, based on data. Even in hiring.

This article is brought to you exclusively by The Business Transformation Network, in partnership with Predictive Hire.
