Playing games with GPT: What can we learn about a large language model from canonical strategic games?
Philip Brookins (University of South Carolina) and
Jason DeBacker (University of South Carolina)
Economics Bulletin, 2024, vol. 44, issue 1, 25 - 37
Abstract:
We aim to understand fundamental preferences over fairness and cooperation embedded in artificial intelligence (AI). We do this by having a large language model (LLM), GPT-3.5, play two classic games: the dictator game and the prisoner's dilemma game. We compare the decisions of the LLM to those of humans in laboratory experiments. We find that the LLM replicates human tendencies towards fairness and cooperation. It does not choose the optimal strategy in most cases. Rather, it shows a tendency towards fairness in the dictator game, even more so than human participants. In the prisoner's dilemma, the LLM displays rates of cooperation much higher than human participants (about 65% versus 37% for humans). These findings aid our understanding of the ethics and rationality embedded in AI.
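The abstract's claim that the LLM "does not choose the optimal strategy" can be made concrete with the prisoner's dilemma, where defection is the dominant (payoff-maximizing) strategy yet the LLM cooperates about 65% of the time. Below is a minimal illustrative sketch using a common textbook payoff parameterization (the specific payoffs are an assumption, not those used in the paper):

```python
# A standard prisoner's dilemma payoff matrix (row player's payoff listed first).
# "C" = cooperate, "D" = defect. Values are an illustrative textbook
# parameterization, not the payoffs from Brookins and DeBacker (2024).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_action: str) -> str:
    """Return the row player's payoff-maximizing action against a fixed opponent action."""
    return max("CD", key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection is a best response to either opponent action, i.e. a dominant strategy,
# so a purely payoff-maximizing player would always defect.
assert best_response("C") == "D"
assert best_response("D") == "D"
```

Under these payoffs, defecting yields a strictly higher payoff regardless of what the opponent does, which is why the observed ~65% cooperation rate signals preferences beyond payoff maximization.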
Keywords: Large language models (LLMs); Generative Pre-trained Transformer (GPT); Experimental Economics; Game Theory; AI
JEL-codes: C7 C9
Date: 2024-03-30
Citations: 1
Downloads:
http://www.accessecon.com/Pubs/EB/2024/Volume44/EB-24-V44-I1-P3.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:ebl:ecbull:eb-23-00457