Why hiring managers must consider AI biases in their recruitment processes
Paul Naha-Biswas, CEO and Founder of Sixley, explains why hiring managers must consider AI biases in their recruitment processes.
There are times in history when an invention changes the entire world.
In 1876, Alexander Graham Bell placed the first-ever telephone call to his assistant, Thomas Watson.
In 1879, Thomas Edison produced the first commercially practical lightbulb and lit up the world, and, just six years later, Karl Benz built the first practical automobile and, suddenly, the planet shrank.
Those nine years changed the trajectory of the human race. And we are now in the midst of a period of similar innovation.
The shift towards a digital-first world
Within the last 25 years, the videotape was supplanted by the DVD, which itself was supplanted by streaming services.
Similarly, phones have become mobile and have dramatically shrunk in size, with the smallest having a screen size of 1.2 inches.
However, unlike inventions from the 19th century, a lot of the innovations happening today do not involve tangible items.
We now live in an age where driverless cars are expected to become ubiquitous within the next 30 years, and Google's AI can already create new songs in the style of artists who died over 50 years ago.
The shift to digital has affected every facet of life, including finding a job.
The digitisation of the recruitment industry
Recruitment no longer solely involves a hiring manager sifting through CVs sent in by applicants.
These days, it’s rare for an applicant to apply for a job that doesn’t involve an online element, be it emailing a CV or filling out an application form over the internet.
Additionally, recruiters now often use software to ‘read’ and filter candidate applications before they even look at them, alongside automated interview scheduling and templated emails sent out at set stages of the process.
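To make that screening step concrete, here is a deliberately simplified sketch of keyword-based CV filtering. It is purely illustrative: the keywords, applicant details and threshold are hypothetical, and real applicant tracking systems are far more sophisticated, but the principle of ranking applications before a human reads them is the same.

```python
# Purely illustrative sketch of a naive keyword-scoring CV screener.
# The keywords, applicants and threshold below are hypothetical.

REQUIRED_KEYWORDS = {"python", "sql", "stakeholder management"}

def score_cv(cv_text: str) -> int:
    """Count how many required keywords appear verbatim in the CV."""
    text = cv_text.lower()
    return sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)

def shortlist(applications: dict, threshold: int = 2) -> list:
    """Return the names of applicants whose CVs meet the keyword threshold.
    Anyone below the threshold is filtered out before a recruiter ever sees them."""
    return [name for name, cv in applications.items() if score_cv(cv) >= threshold]

if __name__ == "__main__":
    applications = {
        "Applicant A": "Experienced in Python and SQL reporting.",
        "Applicant B": "Self-taught analyst; built dashboards and managed stakeholders daily.",
    }
    # Only Applicant A survives, even though Applicant B describes similar
    # skills in different words.
    print(shortlist(applications))
```

Even in this toy example, a candidate who describes the same experience in different language is silently filtered out before anyone reads their application.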
It’s argued that automating the recruitment process helps hiring managers find better candidates more quickly and cuts out some of the time-consuming, low-value tasks associated with finding talent.
However, it is not without its controversy.
The issue of racial bias in AI systems
The promise of AI systems in almost any walk of life is that they are faster, cheaper, and more accurate. Yet the danger is that they are unaccountable and uncontestable, which only reinforces existing hierarchies and human biases.
Take Amazon’s facial recognition software, Rekognition, for example. The software was sold to police departments across the US but was pulled from police use after numerous studies found bias in the software, which disproportionately misidentified black people and other ethnic minorities.
The American Civil Liberties Union found in 2018 that the Rekognition software incorrectly matched 28 members of Congress with mugshots of people who had been arrested for a crime, and that the false matches were disproportionately of people of colour, including six members of the Congressional Black Caucus.
These racial biases embedded within AI will not come as a surprise to anyone knowledgeable about the field.
In the past 12 months alone, we’ve seen several instances where AI systems built on an inadequate human understanding of complex societal issues and on imperfect data have been exposed, such as the A-level results fiasco in the UK and the placing of frontline medical staff at the back of the COVID-19 vaccine queue across the pond.
How these biases could impact the recruitment industry
In the context of the recruitment industry, these biases could result in jobseekers from disadvantaged communities being overlooked for roles through no fault of their own.
This should set off alarm bells when you consider that one in three ethnic minority workers say they have been unfairly turned down for a job, compared with just one in five (19%) white workers, according to the Trades Union Congress.
Separate research found that over half of job adverts (52%) use gendered wording that skews them towards men.
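Checks for gendered wording typically work by comparing an advert against published lists of masculine- and feminine-coded terms. The sketch below is a heavily abbreviated, purely illustrative version of that idea; the word lists and the example advert are hypothetical, not taken from any particular study or tool.

```python
# Illustrative sketch of a gendered-wording audit for job adverts.
# The word lists here are short, hypothetical examples; real tools draw on
# much longer lists derived from published research.

import re

MASCULINE_CODED = {"competitive", "dominant", "ambitious", "assertive", "fearless"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "dependable"}

def audit_advert(text: str) -> dict:
    """Return the masculine- and feminine-coded words found in an advert."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

advert = "We want an ambitious, competitive self-starter to join our team."
print(audit_advert(advert))
# {'masculine_coded': ['ambitious', 'competitive'], 'feminine_coded': []}
```

A simple audit like this will not fix a biased hiring process on its own, but it shows how easily the wording problem can at least be surfaced before an advert goes live.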
One solution to this problem is changing the demographic composition of the tech industry and promoting more diversity among those involved in algorithm writing.
However, with the proportion of computer science bachelor’s degrees earned by women at US universities more than halving since 1984, to just 18%, we also need to take more immediate action by making training in technology more accessible and financially supported or incentivised.
Why referrals could overcome these biases
Those who have used referrals will know that they are an invaluable way to hire new talent and attract candidates from sections of society that are often overlooked. By supporting one-to-one referrals, social sharing, and dedicated group referral boards, hiring managers can proactively invite diversity in and target workers from marginalised groups.
And referrals are equally crucial for jobseekers from disadvantaged backgrounds, as referred candidates are 15-20 times more likely to be hired than applicants from a jobs board. A personal invitation to apply is also the best way for jobseekers to overcome imposter syndrome and get past mutant algorithms.
The hope is that AI systems of the future will be rid of human biases, promoting diversity and eradicating socio-economic deprivation. However, for the time being, human error and a lack of diversity in the tech industry mean that the AI systems being integrated into our everyday lives are perpetuating, not eradicating, these biases. Business leaders and recruiters need to be aware of these challenges and diversify their recruitment practices to ensure that no candidates are overlooked.