The Emergence of Strategic Reasoning of Large Language Models
Gavin Kader and Dongwoo Lee
Papers from arXiv.org
Abstract:
As large language models (LLMs) have demonstrated strong reasoning abilities in structured tasks (e.g., coding and mathematics), we explore whether these abilities extend to strategic multi-agent environments. We investigate strategic reasoning capabilities -- the process of choosing an optimal course of action by predicting and adapting to others' actions -- of LLMs by analyzing their performance in three classical games from behavioral economics. Using hierarchical models of bounded rationality, we evaluate three standard LLMs (ChatGPT-4, Claude-3.5-Sonnet, Gemini 1.5) and three reasoning LLMs (OpenAI-o1, Claude-4-Sonnet-Thinking, Gemini Flash Thinking 2.0). Our results show that reasoning LLMs exhibit superior strategic reasoning compared to standard LLMs (which do not demonstrate substantial capabilities) and often match or exceed human performance; this represents the first and thus most fundamental transition in strategic reasoning capabilities documented in LLMs. Since strategic reasoning is fundamental to future AI systems (including Agentic AI), our findings demonstrate the importance of dedicated reasoning capabilities in achieving effective strategic reasoning.
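The listing does not name the three games or the specific hierarchical model the authors use. Purely as an illustration of the kind of hierarchical bounded-rationality framework the abstract mentions, the sketch below implements level-k reasoning in the p-beauty contest, a classical behavioral-economics game: a level-0 player guesses the midpoint of [0, 100], and a level-k player best-responds to a population of level-(k-1) players by guessing p times their guess. The parameter values (p = 2/3, level-0 anchor of 50) are standard conventions, not details taken from the paper.

```python
# Level-k reasoning in the p-beauty contest: each player picks a number in
# [0, 100], and the guess closest to p times the group mean wins.
# This is an illustrative sketch, not the paper's actual model.

def level_k_guess(k: int, p: float = 2 / 3, level0: float = 50.0) -> float:
    """Guess of a level-k player who assumes everyone else is level k-1.

    Level 0 anchors at `level0`; each higher level multiplies the
    previous level's guess by p (best response to a homogeneous
    population of the level below).
    """
    guess = level0
    for _ in range(k):
        guess = p * guess
    return guess

if __name__ == "__main__":
    # Guesses shrink toward 0, the Nash equilibrium, as k grows.
    for k in range(5):
        print(f"level {k}: {level_k_guess(k):.2f}")
```

Fitting which level k best explains observed choices is one common way such hierarchical models are used to score the strategic sophistication of human (or LLM) players.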
Date: 2024-12, Revised 2025-10
New Economics Papers: this item is included in nep-ain, nep-big, nep-cmp, nep-evo and nep-neu
Downloads: http://arxiv.org/pdf/2412.13013 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2412.13013