The Artificial Recruiter: Risks of Discrimination in Employers’ Use of AI and Automated Decision‐Making

Stefan Larsson, James Merricks White and Claire Ingram Bogusz
Additional contact information
Stefan Larsson: Department of Technology and Society, Lund University, Sweden
James Merricks White: Department of Technology and Society, Lund University, Sweden
Claire Ingram Bogusz: Department of Informatics and Media, Uppsala University, Sweden

Social Inclusion, 2024, vol. 12

Abstract: Extant literature points to how the risk of discrimination is intrinsic to AI systems owing to the dependence on training data and the difficulty of post hoc algorithmic auditing. Transparency and auditability limitations are problematic both for companies’ prevention efforts and for government oversight, both in terms of how artificial intelligence (AI) systems function and how large‐scale digital platforms support recruitment processes. This article explores the risks and users’ understandings of discrimination when using AI and automated decision‐making (ADM) in worker recruitment. We rely on data in the form of 110 completed questionnaires with representatives from 10 of the 50 largest recruitment agencies in Sweden and representatives from 100 Swedish companies with more than 100 employees (“major employers”). In this study, we made use of an open definition of AI to accommodate differences in knowledge and opinion around how AI and ADM are understood by the respondents. The study shows a significant difference between direct and indirect AI and ADM use, which has implications for recruiters’ awareness of the potential for bias or discrimination in recruitment. All of those surveyed made use of large digital platforms like Facebook and LinkedIn for their recruitment, leading to concerns around transparency and accountability—not least because most respondents did not explicitly consider this to be AI or ADM use. We discuss the implications of direct and indirect use in recruitment in Sweden, primarily in terms of transparency and the allocation of accountability for bias and discrimination during recruitment processes.

Keywords: ADM and risks of discrimination; AI and accountability; AI and risks of discrimination; AI and transparency; artificial intelligence; automated decision‐making; discrimination in recruitment; indirect AI use; platforms and discrimination
Date: 2024

Downloads: https://www.cogitatiopress.com/socialinclusion/article/view/7471 (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:cog:socinc:v12:y:2024:a:7471

DOI: 10.17645/si.7471


Social Inclusion is currently edited by Mariana Pires

More articles in Social Inclusion from Cogitatio Press
Bibliographic data for series maintained by António Vieira and IT Department.

Handle: RePEc:cog:socinc:v12:y:2024:a:7471