Will user-contributed AI training data eat its own tail?
Joshua Gans
Economics Letters, 2024, vol. 242, issue C
Abstract:
This paper examines this question and finds that the answer is likely to be no. The environment examined starts with users who contribute based on their motives to create a public good. Their own actions determine the quality of that public good but also embed a free-rider problem. When AI is trained on that data, it can generate contributions to the public good similar to those of humans. It is shown that this increases the incentive of human users to provide contributions that are more costly to supply. Thus, the overall quality of contributions from both AI and humans rises compared to human-only contributions. In situations where platform providers want to generate more contributions using explicit incentives, the rate of return on such incentives is shown to be lower in this environment.
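As a rough, hypothetical illustration of the mechanism described in the abstract (not the paper's formal model), the sketch below assumes two contribution types: a cheap "standard" one that a trained AI can imitate, and a costlier "novel" one it cannot. All qualities, costs, and the private benefit share are made-up parameters, chosen only to show how AI-supplied standard contributions can push contributing humans toward the costlier, higher-quality option rather than toward free-riding.

# Illustrative sketch only; functional forms and numbers are assumptions.
BENEFIT_SHARE = 0.7                         # share of own marginal quality captured privately
STANDARD = {"quality": 1.0, "cost": 0.3}    # cheap contribution an AI can imitate
NOVEL = {"quality": 2.0, "cost": 1.2}       # costlier contribution beyond the AI's training data

def best_response(ai_supplies_standard):
    """Return the contribution type a human chooses, or None to free-ride."""
    # Once AI supplies standard-quality items, an extra standard
    # contribution adds (almost) nothing to the public good at the margin.
    marginal_standard = 0.0 if ai_supplies_standard else STANDARD["quality"]
    payoffs = {
        "standard": BENEFIT_SHARE * marginal_standard - STANDARD["cost"],
        "novel": BENEFIT_SHARE * NOVEL["quality"] - NOVEL["cost"],
        "free-ride": 0.0,
    }
    choice = max(payoffs, key=payoffs.get)
    return None if choice == "free-ride" else {"standard": STANDARD, "novel": NOVEL}[choice]

for ai in (False, True):
    pick = best_response(ai)
    label = "with AI" if ai else "human-only"
    print(label, "->", "free-rides" if pick is None else "supplies quality %.1f" % pick["quality"])
# Output: human-only -> supplies quality 1.0   (cheap contributions dominate)
#         with AI    -> supplies quality 2.0   (humans shift to costlier contributions)

Under these assumed numbers, the cheap contribution dominates in the human-only case, but once AI supplies it, the contributing human switches to the costlier item, so combined quality (AI standard plus human novel) rises, consistent with the comparative static the abstract reports.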
Keywords: Artificial intelligence; Training data; User contributions; Prediction
JEL-codes: D70 H44 O31
Date: 2024
Downloads: http://www.sciencedirect.com/science/article/pii/S0165176524003525 (full text for ScienceDirect subscribers only)
Related works:
Working Paper: Will User-Contributed AI Training Data Eat Its Own Tail? (2024) 
Persistent link: https://EconPapers.repec.org/RePEc:eee:ecolet:v:242:y:2024:i:c:s0165176524003525
DOI: 10.1016/j.econlet.2024.111868