Can Socially Minded Governance Control the Artificial General Intelligence Beast?

Joshua Gans

Management Science, 2025, vol. 71, issue 10, 8188-8199

Abstract: This paper robustly concludes that it cannot. A model is constructed under idealized conditions that presume that the risks associated with artificial general intelligence (AGI) are real, that safe AGI products are possible, and that there exist socially minded funders who are interested in funding safe AGI even if this does not maximize profits. It is demonstrated that a socially minded entity formed by such funders would not be able to minimize the harm that unrestricted AGI products released by for-profit firms might create. The reason is that a socially minded entity can only minimize the use of unrestricted AGI products through ex post competition with for-profit firms at a prohibitive financial cost, and so it does not preempt the AGI developed by for-profit firms ex ante.
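The abstract's central mechanism, crowding out unrestricted AGI through ex post price competition, can be illustrated with a toy calculation. The sketch below is not the paper's model: the linear demand assumption, the cost and willingness-to-pay figures, and the names (loss_to_crowd_out, value_gap, and so on) are all illustrative assumptions. It shows only that the subsidy a socially minded entity must absorb to divert every consumer from an unrestricted for-profit product scales with the size of the market, which is the sense in which ex post competition carries a prohibitive financial cost.

```python
# Toy illustration (not the paper's model): a socially minded entity
# tries to divert consumers from an unrestricted for-profit AGI product
# by underpricing its safe product ex post. All figures are assumptions.

def loss_to_crowd_out(market_size, value_gap, unit_cost_safe, price_unrestricted):
    """Loss the social entity absorbs so every consumer prefers the safe product.

    value_gap: assumed extra willingness to pay for the unrestricted product
               (being unrestricted, it is more capable in some uses).
    To win every consumer, the safe price must satisfy
        price_safe <= price_unrestricted - value_gap.
    """
    price_safe = price_unrestricted - value_gap
    per_unit_loss = max(unit_cost_safe - price_safe, 0.0)
    return per_unit_loss * market_size

for market_size in (1_000, 1_000_000, 1_000_000_000):
    loss = loss_to_crowd_out(market_size,
                             value_gap=5.0,          # assumed capability premium
                             unit_cost_safe=10.0,    # assumed cost per user served
                             price_unrestricted=8.0) # assumed for-profit price
    print(f"market of {market_size:>13,} users -> required subsidy {loss:,.0f}")
```

In this toy setup the required subsidy grows linearly with adoption, so the larger the market for AGI products, the costlier the ex post price war becomes, consistent with the abstract's conclusion that the socially minded entity cannot preempt for-profit AGI development ex ante.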

Keywords: artificial general intelligence; existential risk; governance; social objectives
Date: 2025

Downloads: http://dx.doi.org/10.1287/mnsc.2024.05529

Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:71:y:2025:i:10:p:8188-8199

More articles in Management Science from INFORMS. Bibliographic data for this series maintained by Chris Asher.

 