Can AI Solve The Problem of Bias in Recruitment?
Recruitment has become an exceptionally tough industry, partly because the global talent pool is so wide and diverse, and partly because many modern job roles have become more specific, nuanced and complex.
Digital working environments and the rise of working from home have made recruitment even more demanding, as even the smallest, most isolated businesses can now access an international labour market.
AI is already working behind the scenes at many recruitment services, scanning candidates often without their knowledge – it might even be a robot that offers you your next job.
The involvement of AI in recruitment is not as straightforward as many imagined a decade ago, and it has prompted a wider debate that cuts to the heart of issues such as bias, inclusion and diversity.
There is still much work to be done to overcome these issues and extract the most out of recruitment AI, but there is perhaps at least now a consensus that we are headed in the right direction.
Let’s first address the potential benefits of AI in recruitment.
The modern employment market is radically different to what it was even 20 years ago. Unemployment is generally decreasing in much of the world, and whilst wages remain stagnant for many, the job market has become exceedingly competitive. Some jobs now receive thousands of applications instead of fewer than a hundred.
This creates a difficult situation on both sides of the fence: candidates may struggle to stand out even when they are genuinely the best person for the job, and recruiters may struggle to find them.
AI works at scale to extract genuinely useful information for recruiters without skimming over crucial details, which should increase the odds of moving the best candidates to the top of the pile.
AI recruitment systems have the potential to be more personable and engaging, creating useful experiences for candidates regardless of whether they do or don’t get the job.
AI platforms can give everyone a fair chance, often remotely, personalising the screening process regardless of location, timezone, etc. Successful candidates can then be onboarded from the same workflow.
Manually screening potentially thousands of applicants is extremely complex and involves a great deal of manual handling, and all of that data has to be stored safely in accordance with privacy regulations.
Tedious recruitment tasks are automated in AI recruitment workflows, allowing recruiters to create a ‘single source of truth’.
AI can also streamline recruitment compliance, allowing employers to audit their hiring data across gender, ethnicity, disability and other protected characteristics.
This data can be explored to gain insights into what candidates the business/organisation tends to hire, why, and whether there are issues to address.
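As a concrete illustration, the sketch below shows the kind of audit such a platform might run under the hood: selection rates per group, and the adverse-impact ratio behind the well-known 'four-fifths rule'. The data and field names are entirely hypothetical.

```python
import pandas as pd

# Hypothetical hiring records: one row per applicant, with the
# demographic group recorded for auditing and the final outcome.
applicants = pd.DataFrame({
    "gender": ["female", "male", "female", "male", "male", "female"],
    "hired":  [1, 1, 0, 1, 0, 1],
})

# Selection rate per group: hires divided by applicants in that group.
rates = applicants.groupby("gender")["hired"].mean()

# Adverse-impact ratio ("four-fifths rule"): the lowest selection rate
# divided by the highest. Values below 0.8 are a common red flag.
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Adverse-impact ratio: {impact_ratio:.2f}")
```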
The benefits of AI in recruitment are proven in some instances and hotly contested in others.
The recruitment process is highly sensitive to human psychology, especially when it comes to bias, and this is where issues tend to arise. AI is still ultimately dependent on human input.
Analysing massive quantities of data using advanced algorithms is helping recruiters source and retain qualified candidates for open positions, but this hasn’t been without its flaws. For example, it wasn’t so long ago that Amazon scrapped its AI recruitment tool for showing bias towards men.
Why? Because the tool was trained on 10 years of the company’s data – 10 years in which men featured far more heavily in tech roles than women. Google was at the centre of its own diversity controversy when Timnit Gebru, the co-lead of Google’s Ethical AI team, was fired after highlighting the ongoing issue of diversity in tech.
Of course, the issue of bias is not intrinsic to the AI itself, but more likely intrinsic to those who trained it – even if they didn’t perceive it at the time. Many issues of bias imbued in AI and machine learning projects are attributable to poor-quality or ill-considered training data, which is relatively simple to iron out using pre- and post-processing techniques.
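One widely used pre-processing technique, for example, is reweighing: each training record is given a weight so that group membership and hiring outcome become statistically independent before a model ever sees the data. The sketch below is a minimal, illustrative version with made-up data:

```python
import pandas as pd

# Hypothetical training data: each row is a past hiring decision.
data = pd.DataFrame({
    "gender": ["male", "male", "male", "female", "female", "female"],
    "hired":  [1, 1, 0, 1, 0, 0],
})

# Reweighing: weight each row by the ratio of the expected frequency
# of its (group, label) pair (if group and label were independent)
# to the observed frequency. Under-represented pairs get weights > 1.
p_group = data["gender"].value_counts(normalize=True)
p_label = data["hired"].value_counts(normalize=True)
p_joint = data.groupby(["gender", "hired"]).size() / len(data)

def weight(row):
    expected = p_group[row["gender"]] * p_label[row["hired"]]
    observed = p_joint[(row["gender"], row["hired"])]
    return expected / observed

data["weight"] = data.apply(weight, axis=1)
print(data)
```

Training a screening model on these weights nudges it towards treating the historically under-hired group’s successes as just as informative as anyone else’s.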
Data issues are compounded by the fact that AI engineering teams are still limited in size, meaning there simply aren’t enough eyes on the data to spot potential issues during the development and training phases.
Today, AI can make the selection process less biased, not more – or at least that’s what IBM and Talespin argue.
To understand how, we must consider the two main types of bias: conscious and unconscious.
Conscious bias is targeted, deliberate and specific. It involves direct prejudice imparted on individuals based on their personal or collective traits. It is overt and obvious, not to mention illegal in many jurisdictions, even where unconscious bias is not legally recognised. Bias issues in AI tend to revolve around unconscious bias, which is more pervasive and harder to gauge.
Unconscious bias is a more intangible form of bias that manifests without deliberate or specific intent. Everything from names and locations to photos and educational institutions can trigger unconscious bias, as shown by numerous studies that found job success rates were higher for candidates with stereotypically white-sounding names.
The distinction is often pictured as a ‘bias iceberg’: conscious bias sits visibly above the waterline, while unconscious bias lies beneath the surface, larger and far harder to see.
To counter both conscious and unconscious bias, not only does the training data need to be properly scrutinised for inclusion – which is no easy task – but the AI needs to go about its task of screening candidates in a way that excludes bias-provoking data.
This involves putting the precursors of unconscious bias aside (e.g. names) and instead focussing on non-identifiable information, such as work history, educational attainment and resume content. Keywords can be extracted and tallied against the job role to match candidates based on their content and profiles.
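A minimal sketch of what such blind screening might look like – the field names and the deliberately naive keyword matcher are illustrative only:

```python
import re

# Hypothetical candidate record; field names are illustrative.
candidate = {
    "name": "Jane Doe",    # bias-provoking: never shown to the screener
    "photo": "jane.jpg",   # bias-provoking: never shown to the screener
    "work_history": "5 years backend development with Python and SQL",
    "education": "BSc Computer Science",
}

# Only non-identifiable fields are visible to the matching step.
SCREENABLE = {"work_history", "education"}

def keywords(text):
    """Lower-case word tokens, ignoring digits and punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def match_score(candidate, job_description):
    """Fraction of job-description keywords found in the candidate's
    screenable fields."""
    visible = " ".join(candidate[f] for f in SCREENABLE)
    job_kw = keywords(job_description)
    return len(job_kw & keywords(visible)) / len(job_kw)

job = "backend developer with python and sql experience"
print(f"Match: {match_score(candidate, job):.0%}")
```

A production system would use far richer matching than raw keyword overlap, but the structural point stands: the score is computed without ever touching the fields most likely to trigger unconscious bias.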
Another issue is locating enough candidate data to feed into the AI. CVs, resumes and job applications can be combined with social media profiles (e.g. LinkedIn) to enrich the data, but recruiters generally need more data to utilise AI effectively in the recruitment process. The thinner the data, the more likely problems become.
One possibility is psychometric testing, which can provide AIs with a wide selection of objective, empirical candidate measures to work with. Psychometric tests also hold strong potential for reducing bias, both during the screening process and the interview process.
Traditional interview questions and group interviews are a potential site for bias and may result in poor candidate selection; psychometric tests, by contrast, forgo the traditional interview format and can explore numerical, verbal and logical skills in a fair and transparent manner.
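Part of what makes standardised tests comparable is simple statistics: raw subtest scores can be converted to z-scores so that every candidate is ranked on the same scale, regardless of how each subtest is graded. A small, illustrative sketch with invented candidates and scores:

```python
from statistics import mean, stdev

# Hypothetical raw scores on three psychometric subtests
# (numerical, verbal, logical) for a small candidate pool.
pool = {
    "cand_a": [28, 31, 22],
    "cand_b": [35, 25, 30],
    "cand_c": [30, 29, 27],
}

# Standardise each subtest to z-scores so subtests with different
# scales contribute equally, then average into a composite score.
columns = list(zip(*pool.values()))
means = [mean(col) for col in columns]
sds = [stdev(col) for col in columns]

def composite(scores):
    return mean((s - m) / sd for s, m, sd in zip(scores, means, sds))

for cand, scores in pool.items():
    print(cand, round(composite(scores), 2))
```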
Some recruiters have taken this a step further. HireVue utilises computer vision to analyse how candidates answer psychometric interview questions in video interviews, even grading their facial cues and expressions, voice intonation, articulation, and so on. The company claims this data helps predict whether candidates possess a comprehensive understanding of the questions, and even scores how effective they are likely to be in the job.
What we do know is that AI is already deployed in the recruitment processes of many businesses and organisations worldwide. So, whilst a robot might not offer you the job, it might decide whether you’ll succeed or fail before you even start.