
“Un”Fair Machine Learning Algorithms

Runshan Fu, Manmohan Aseri, Param Vir Singh and Kannan Srinivasan
Additional contact information
Runshan Fu: Heinz College, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
Manmohan Aseri: Joseph M. Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
Param Vir Singh: Tepper School of Business, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
Kannan Srinivasan: Tepper School of Business, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213

Management Science, 2022, vol. 68, issue 6, 4173-4195

Abstract: Ensuring fairness in algorithmic decision making is a crucial policy issue. Current legislation ensures fairness by barring algorithm designers from using demographic information in their decision making. As a result, to be legally compliant, algorithms need to ensure equal treatment. However, in many cases, ensuring equal treatment leads to disparate impact, particularly when there are differences among groups based on demographic classes. In response, several “fair” machine learning (ML) algorithms that require impact parity (e.g., equal opportunity) at the cost of equal treatment have recently been proposed to adjust for societal inequalities. Advocates of fair ML propose changing the law to allow the use of protected-class-specific decision rules. We show that the proposed fair ML algorithms that require impact parity, while conceptually appealing, can make everyone worse off, including the very class they aim to protect. Compared with the current law, which requires treatment parity, the fair ML algorithms, which require impact parity, limit the benefits of a more accurate algorithm for a firm. As a result, profit-maximizing firms could underinvest in learning, that is, in improving the accuracy of their machine learning algorithms. We show that the investment in learning decreases when misclassification is costly, which is exactly the case when greater accuracy is otherwise desired. Our paper highlights the importance of considering the strategic behavior of stakeholders when developing and evaluating fair ML algorithms. Overall, our results indicate that fair ML algorithms that require impact parity, if turned into law, may not be able to deliver some of the anticipated benefits.
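To make the abstract's central distinction concrete, the following is a minimal illustrative sketch (not from the paper; all scores, labels, and thresholds are made-up toy data) of "equal treatment" — one decision threshold applied to every group — versus impact parity in the form of equal opportunity, i.e., equal true positive rates across groups:

```python
# Sketch: equal treatment vs. equal opportunity on toy data.
# Assumption (illustrative only): group B's scores understate qualification,
# so a single shared threshold treats groups equally but impacts them unequally.

def true_positive_rate(scores, labels, threshold):
    """Share of truly qualified (label == 1) individuals approved at threshold."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    if not positives:
        return 0.0
    return sum(s >= threshold for s in positives) / len(positives)

# Toy populations: first three members of each group are qualified (label 1).
scores_a = [0.9, 0.8, 0.7, 0.4, 0.3]
labels_a = [1, 1, 1, 0, 0]
scores_b = [0.7, 0.6, 0.5, 0.4, 0.2]
labels_b = [1, 1, 1, 0, 0]

# Equal treatment: the same rule for both groups ...
shared_threshold = 0.65
tpr_a = true_positive_rate(scores_a, labels_a, shared_threshold)  # 1.0
tpr_b = true_positive_rate(scores_b, labels_b, shared_threshold)  # 1/3
# ... produces disparate impact: qualified B members are approved far less often.

# Equal opportunity instead permits group-specific thresholds chosen so that
# the true positive rates match, e.g. a lower threshold for group B:
tpr_b_adjusted = true_positive_rate(scores_b, labels_b, 0.45)  # 1.0
```

The paper's argument operates on top of this mechanism: because the group-specific rule caps how much a firm gains from a sharper score, the firm may invest less in improving the score itself.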

Keywords: algorithmic bias; economics of artificial intelligence; fair machine learning; equal impact; equal treatment
Date: 2022
Citations: 2 (in EconPapers)

Downloads: http://dx.doi.org/10.1287/mnsc.2021.4065


Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:68:y:2022:i:6:p:4173-4195


Handle: RePEc:inm:ormnsc:v:68:y:2022:i:6:p:4173-4195