Choosing an algorithmic fairness metric for an online marketplace: Detecting and quantifying algorithmic bias on LinkedIn

YinYin Yu and Guillaume Saint-Jacques

Papers from arXiv.org

Abstract: In this paper, we derive an algorithmic fairness metric from the fairness notion of equal opportunity for equally qualified candidates, for recommendation algorithms commonly used by two-sided marketplaces. We borrow from the economic literature on discrimination to arrive at a test for detecting bias that is solely attributable to the algorithm, as opposed to other sources such as societal inequality or human bias on the part of platform users. We use the proposed method to measure and quantify algorithmic bias with respect to gender in two algorithms used by LinkedIn, a popular online platform used by job seekers and employers. Moreover, we introduce a framework and the rationale for distinguishing algorithmic bias from human bias, both of which can potentially exist on a two-sided platform where algorithms make recommendations to human users. Finally, we discuss the shortcomings of a few other common algorithmic fairness metrics and explain why they do not capture the fairness notion of equal opportunity for equally qualified candidates.
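The fairness notion the abstract invokes — equal opportunity for equally qualified candidates — is commonly operationalized by comparing, across demographic groups, the rate at which *qualified* candidates receive a favorable outcome (here, a recommendation). The sketch below is a minimal illustration of that idea, not the paper's actual test statistic or LinkedIn's implementation; the function name, inputs, and the choice of a max-minus-min gap are all assumptions for illustration.

```python
from collections import defaultdict

def equal_opportunity_gap(qualified, recommended, group):
    """Illustrative equal-opportunity check: among qualified candidates,
    compare recommendation rates across groups.

    qualified, recommended: iterables of 0/1 flags per candidate.
    group: iterable of group labels (e.g. "f"/"m") per candidate.
    Returns (per-group rates, largest rate difference between groups).
    """
    hits = defaultdict(int)    # recommended AND qualified, per group
    totals = defaultdict(int)  # qualified, per group
    for q, r, g in zip(qualified, recommended, group):
        if q:  # condition on qualification: only qualified candidates count
            totals[g] += 1
            hits[g] += int(r)
    rates = {g: hits[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy example: six equally qualified candidates, three per group.
rates, gap = equal_opportunity_gap(
    qualified=[1, 1, 1, 1, 1, 1],
    recommended=[1, 1, 0, 1, 0, 0],
    group=["f", "f", "f", "m", "m", "m"],
)
```

In this toy data the qualified "f" candidates are recommended at rate 2/3 versus 1/3 for "m", so the gap is nonzero; a real test, as the paper emphasizes, must additionally separate such a gap into algorithmic bias versus bias already present in the human-generated inputs.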

Date: 2022-02, Revised 2022-08
New Economics Papers: this item is included in nep-reg
Downloads: (external link)
http://arxiv.org/pdf/2202.07300 Latest version (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2202.07300


More papers in Papers from arXiv.org
Bibliographic data for this series is maintained by arXiv administrators.

 
Page updated 2025-03-19
Handle: RePEc:arx:papers:2202.07300