TALiNT Partners Insights provides invaluable information that enables businesses to make informed, strategic decisions. Our curated insights are your tools for problem-solving, fostering growth, and achieving success within talent acquisition and staffing.

AI hiring tools may be filtering out the best job applicants

AI tools under scrutiny for bias in candidate screening

Content Insights

42% of companies reported using AI screening to enhance their recruiting.
AI algorithms favour candidates with certain characteristics.
Without intervention, AI could further entrench inequalities in the workplace.

In today’s job market, candidates are often subjected to a battery of assessments, from body-language analysis to vocal evaluations, all powered by artificial intelligence (AI) recruiting software. These tools, increasingly used by companies worldwide, determine whether an applicant is deemed suitable for a position.

According to a late-2023 survey conducted by IBM among over 8,500 global IT professionals, 42% of companies reported using AI screening to enhance their recruiting and human resources processes, with an additional 40% considering its integration.

Despite hopes that AI would mitigate biases in hiring, concerns are mounting that these technologies may instead exacerbate inequalities. Hilke Schellmann, author of “The Algorithm: How AI Can Hijack Your Career and Steal Your Future” and assistant professor of journalism at New York University, points out that there is little evidence these tools are free of bias, or that they actually identify the most qualified candidates.

Instances have surfaced where qualified applicants faced adverse outcomes due to AI-driven assessments. For instance, Anthea Mairoudhiou, a UK-based makeup artist, recounted being eliminated from consideration after an AI tool rated her body language poorly, despite performing well in skills evaluations. Similar grievances have been filed against various platforms, underscoring the opacity surrounding candidates’ rejections.

Schellmann elucidates systemic flaws within these technologies. In some cases, AI algorithms favour candidates with certain characteristics, such as hobbies associated with success, leading to the inadvertent exclusion of marginalised groups. Moreover, biased selection criteria, whether overt or opaque, perpetuate inequalities in the hiring process.

Schellmann warns that the widespread adoption of biased algorithms could potentially harm hundreds of thousands of applicants, outweighing the impact of individual human biases.

Furthermore, Schellmann’s research reveals instances where flawed AI assessments prioritise irrelevant factors over pertinent credentials, raising concerns about the reliability of these tools.

The pervasive nature of AI in recruitment only amplifies these concerns.

Moreover, the lack of accountability among AI vendors and companies, coupled with cost-saving incentives, exacerbates the issue. Schellmann notes a rush to market underdeveloped products, with little incentive for firms to rectify inherent flaws.

Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute, University of Oxford, emphasises the imperative of fair and unbiased AI in recruitment. She advocates for tools like the Conditional Demographic Disparity test, designed to detect and address algorithmic biases, as a means to promote fairness and equity.
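The Conditional Demographic Disparity test asks whether a protected group makes up a larger share of the rejected pool than of the accepted pool, once applicants are compared within a legitimate stratum such as the role they applied for. The idea can be sketched in a few lines of Python (the record layout and field names here are illustrative, not any vendor’s API):

```python
from collections import defaultdict

def demographic_disparity(records, group):
    """DD for `group`: its share of the rejected pool minus its share of
    the accepted pool. Positive = over-represented among rejections."""
    rejected = [r for r in records if not r["accepted"]]
    accepted = [r for r in records if r["accepted"]]
    if not rejected or not accepted:
        return 0.0
    share_rej = sum(r["group"] == group for r in rejected) / len(rejected)
    share_acc = sum(r["group"] == group for r in accepted) / len(accepted)
    return share_rej - share_acc

def conditional_demographic_disparity(records, group, stratum_key):
    """CDD: demographic disparity computed within each stratum
    (e.g. the role applied for) and averaged, weighted by stratum size."""
    strata = defaultdict(list)
    for r in records:
        strata[r[stratum_key]].append(r)
    n = len(records)
    return sum(len(rs) / n * demographic_disparity(rs, group)
               for rs in strata.values())

# Illustrative screening outcomes for a single role: group A has
# 3 rejections and 1 acceptance, group B has 1 of each.
records = (
    [{"group": "A", "accepted": False, "role": "makeup"}] * 3
    + [{"group": "A", "accepted": True, "role": "makeup"}]
    + [{"group": "B", "accepted": False, "role": "makeup"}]
    + [{"group": "B", "accepted": True, "role": "makeup"}]
)
print(conditional_demographic_disparity(records, "A", "role"))  # 0.25
```

Stratifying before measuring disparity matters because an overall gap can be explained (or hidden) by the mix of roles applied for; a positive CDD indicates the group remains over-represented among rejections even within comparable applicant pools.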

In response to these challenges, Schellmann calls for industry-wide regulation and oversight to safeguard against discriminatory practices. Without intervention, she warns that AI could further entrench inequalities in the workplace.

In navigating the complexities of AI-driven recruitment, the imperative remains clear: to ensure that technology serves as a tool for fair and equitable decision-making, rather than perpetuating systemic biases.
