How learning about harms impacts the optimal rate of artificial intelligence adoption
Joshua S Gans
Economic Policy, 2025, vol. 40, issue 121, 199-219
Abstract:
This paper examines recent proposals and research suggesting that artificial intelligence (AI) adoption should be delayed until its potential harms are fully understood. Conclusions on the social optimality of delayed AI adoption are shown to be sensitive to assumptions about the process by which regulators learn about the salience of particular harms. When such learning is by doing – based on the real-world adoption of AI – this generally favours acceleration of AI adoption to surface and react to potential harms more quickly. This case is strengthened when AI adoption is potentially reversible. The paper also examines how different conclusions regarding the optimality of accelerated or delayed AI adoption influence, and are influenced by, other policies that may moderate AI harm.
JEL-codes: O33; L51
Date: 2025
Downloads: http://hdl.handle.net/10.1093/epolic/eiae053 (application/pdf)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:oup:ecpoli:v:40:y:2025:i:121:p:199-219.
Economic Policy is currently edited by Ghazala Azmat, Roberto Galbiati, Isabelle Mejean and Moritz Schularick
More articles in Economic Policy from CEPR, CESifo, Sciences Po, CES and MSH.
Bibliographic data for series maintained by Oxford University Press.