
Does AI Solve the Hiring Bias Problem?
21 Jun
It’s no secret that humans often make biased decisions in the hiring process. One widely cited study showed recruiters selecting résumés with “white-sounding” names significantly more often than otherwise identical résumés with “black-sounding” names, even though both groups had equivalent credentials on paper.
So it is clear humans are the problem. But does that mean AI is the answer?
There’s no question the injection of AI into the recruiting industry is exciting for job seekers and firms alike. But it is precisely now, when many new players are exploring the tremendous opportunities at hand, that developers and policymakers must be doubly cautious to ensure ethical standards are upheld in the development of new AI-powered hiring technologies.
Advocates for the AI movement say the key is to focus on identifying “raw talent” irrespective of the candidate’s name. Rather than filtering on skills and knowledge, “talent” encapsulates inherent characteristics, such as teamwork, learning agility, and resilience, that are less subject to bias.
A major advantage of AI and its accompanying data sets is that it can help identify softer skills and surface talent that a traditional hiring process would never have found. Measuring core competencies rather than accumulated experience helps identify people who will actually perform on the job. So it’s much more than a streamlined way to connect job seekers and employers. With a vast population of people looking for new opportunities and an eager bevy of employers looking to snap up the best and brightest, this trend toward recruiting algorithms will only grow.
But now we turn to the underlying problem with AI when it substitutes for human decision makers in hiring: algorithmic discrimination. Human bias can easily and unwittingly be propagated into AI by its creators, particularly if designers are not careful about how they select input data and how they craft the underlying algorithms.
If, for example, AI is used in video interviewing to analyze facial expressions, word choice, gestures, and voice inflections, and the developers make errors through oversight, then unintentional bias can creep into the input data and the machine learning build process. Failing to identify selection bias before feeding data to the AI only perpetuates the biases that limit opportunities for certain groups of job seekers.
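The mechanism is worth making concrete. Below is a minimal sketch, in Python, of how selection bias in historical labels propagates into a model trained on them. Everything here is hypothetical: the groups “A” and “B”, the skill scores, and the biased decision thresholds are made up for illustration, not drawn from any real hiring data set.

```python
import random

random.seed(0)

# Hypothetical historical hiring records. Both groups have the same skill
# distribution, but past (human) decisions applied a stricter bar to group B,
# so the "hired" label itself encodes the bias.
def biased_historical_decision(skill, group):
    threshold = 0.5 if group == "A" else 0.7  # assumed bias in past decisions
    return skill >= threshold

records = [
    (random.random(), group)
    for group in ("A", "B")
    for _ in range(1000)
]
records = [(skill, group, biased_historical_decision(skill, group))
           for skill, group in records]

# A naive "model" that simply learns historical hire rates per group
# (in practice, group membership often leaks in via proxy features).
def learned_hire_rate(group):
    outcomes = [hired for _, g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = learned_hire_rate("A")
rate_b = learned_hire_rate("B")
print(f"Group A learned hire rate: {rate_a:.2f}")
print(f"Group B learned hire rate: {rate_b:.2f}")
# Despite identical skill distributions, the learned rates differ:
# training on these labels reproduces the historical disparity.
```

The point of the sketch is that the disparity survives even though “skill” is the only legitimate signal in the data; any model fit to these labels inherits the skew unless the training data is audited first.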
There is no way to remove humans from the equation entirely, but with careful data gathering and a large, diverse group of engineers and developers, corporate algorithms can come far closer to being free of hiring bias. When AI is put to work in recruiting ethically and transparently, and is built on proven psychometric principles, it can encourage better candidate fits, promote fairer interview screening, and increase overall efficiency.