Artificial intelligence, distributional fairness, and pivotality
Victor Klockmann, Alicia von Schenk and Marie Claire Villeval
European Economic Review, 2025, vol. 178, issue C
Abstract:
In the field of machine learning, the decisions of algorithms depend on extensive training data contributed by numerous, often human, sources. How does this property affect the social nature of the human decisions that serve to train these algorithms? By experimentally manipulating the pivotality of individual decisions for a supervised machine learning algorithm, we show that the diffusion of responsibility weakened revealed social preferences, leading to algorithmic models that favor selfish decisions. Importantly, this phenomenon cannot be attributed to shifts in incentive structures or the presence of externalities. Rather, our results suggest that the expansive nature of Big Data fosters a sense of diminished responsibility and serves as an excuse for selfish behavior that impacts both individuals and society as a whole.
Keywords: Artificial intelligence; Big data; Pivotality; Distributional fairness; Experiment
JEL-codes: C91 D10 D63 D90 O33
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S0014292125001485 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:eecrev:v:178:y:2025:i:c:s0014292125001485
DOI: 10.1016/j.euroecorev.2025.105098