The Promise and Peril of Generative AI: Evidence from GPT as Sell-Side Analysts
Edward Li, Min Shen, Zhiyuan Tu and Dexin Zhou
Papers from arXiv.org
Abstract:
Large language models (LLMs) promise to democratize financial analysis by reducing information-processing costs. Yet equal access does not ensure equal outcomes, as the locus of friction may shift from processing information to evaluating model outputs. We study GPT's earnings forecasts following corporate earnings releases and document two patterns. First, GPT's narrative attention is consistent and human-like but not always associated with higher forecast accuracy. Second, its quantitative reasoning varies substantially across contexts, challenging the view that LLMs are uniformly weak at numerical tasks. Building on these insights, we propose a diagnostic framework that links forecast accuracy to observable processing features (i.e., narrative focus, numerical reasoning, and self-assessed confidence). These indicators serve as proxies for this new form of information friction and signal when investors should exercise caution. Our study has implications for information frictions, regulatory oversight, and the economics of AI-mediated financial markets.
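The diagnostic framework ties forecast accuracy to the three observable processing features named above. A minimal sketch of that idea, using hypothetical feature values and a simple logistic classifier (this is an illustration under assumed inputs, not the paper's implementation):

```python
# Illustrative sketch (not the authors' method): score whether a GPT earnings
# forecast is likely to be accurate from three observable processing features
# from the abstract -- narrative focus, numerical reasoning, and self-assessed
# confidence. All feature values and labels below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [narrative_focus, numerical_reasoning, self_assessed_confidence],
# scaled to [0, 1]; label 1 = forecast fell within an accuracy threshold.
X = np.array([
    [0.8, 0.9, 0.7],
    [0.6, 0.2, 0.9],
    [0.4, 0.8, 0.5],
    [0.3, 0.1, 0.8],
    [0.9, 0.7, 0.6],
    [0.2, 0.3, 0.4],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Flag a new forecast for caution when the predicted probability of being
# accurate falls below a chosen threshold.
new_forecast = np.array([[0.5, 0.3, 0.9]])
p_accurate = model.predict_proba(new_forecast)[0, 1]
print(f"P(accurate) = {p_accurate:.2f}; caution advised: {p_accurate < 0.5}")
```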
Date: 2024-12, Revised 2025-10
New Economics Papers: this item is included in nep-ain, nep-big and nep-cmp
Downloads: http://arxiv.org/pdf/2412.01069 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2412.01069