Algorithmic Fairness and Social Welfare
Annie Liang and Jay Lu
Papers from arXiv.org
Abstract:
Algorithms are increasingly used to guide high-stakes decisions about individuals. Consequently, substantial interest has developed around defining and measuring the "fairness" of these algorithms. These definitions of fair algorithms share two features: First, they prioritize the role of a pre-defined group identity (e.g., race or gender) by focusing on how the algorithm's impact differs systematically across groups. Second, they are statistical in nature; for example, comparing false positive rates, or assessing whether group identity is independent of the decision (where both are viewed as random variables). These notions are facially distinct from a social welfare approach to fairness, in particular one based on "veil of ignorance" thought experiments in which individuals choose how to structure society prior to the realization of their social identity. In this paper, we seek to understand and organize the relationship between these different approaches to fairness. Can the optimization criteria proposed in the algorithmic fairness literature also be motivated as the choices of someone from behind the veil of ignorance? If not, what properties distinguish either approach to fairness?
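The two statistical fairness criteria the abstract mentions can be made concrete. The sketch below (illustrative only; the function names and data are not from the paper) computes group-conditional false positive rates and a statistical parity gap, the latter measuring how far the decision is from being independent of group identity:

```python
import numpy as np

def group_false_positive_rates(y_true, y_pred, groups):
    """Per-group false positive rate: P(pred = 1 | true = 0, group = g)."""
    rates = {}
    for g in np.unique(groups):
        # Restrict to members of group g whose true outcome is negative.
        mask = (groups == g) & (y_true == 0)
        rates[g] = y_pred[mask].mean() if mask.any() else float("nan")
    return rates

def statistical_parity_gap(y_pred, groups):
    """Largest difference in P(pred = 1 | group = g) across groups.

    A gap of 0 corresponds to the independence criterion: the decision
    rate is identical for every group.
    """
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Illustrative data: 6 individuals, two groups 'a' and 'b'.
y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 1])
groups = np.array(["a", "a", "b", "b", "a", "b"])

fpr = group_false_positive_rates(y_true, y_pred, groups)
gap = statistical_parity_gap(y_pred, groups)
# fpr → {'a': 0.5, 'b': 1.0}; gap → 2/3
```

Equalizing `fpr` across groups and driving `gap` to zero are distinct (and generally incompatible) targets, which is part of why organizing these criteria against a welfare benchmark is nontrivial.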
Date: 2024-04
New Economics Papers: this item is included in nep-ain
Downloads: http://arxiv.org/pdf/2404.04424 Latest version (application/pdf)
Related works:
Journal Article: Algorithmic Fairness and Social Welfare (2024) 
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2404.04424