Agentic AI and hallucinations

Engin Iyidogan and Ali I. Ozkes

Economics Letters, 2025, vol. 255, issue C

Abstract: We model a competitive market in which AI agents buy answers from upstream generative models and resell them to users who differ in how much they value accuracy and in how much they fear hallucinations. Agents can privately exert costly verification effort to lower hallucination risk. Since interactions halt in the event of a hallucination, the threat of losing future rents disciplines effort. A unique reputational equilibrium exists under nontrivial discounting. Equilibrium effort, and thus the price, increases with the share of users with high accuracy concerns, implying that hallucination-sensitive sectors, such as law and medicine, endogenously induce greater verification effort in agentic AI markets.
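Illustrative sketch (not from the paper; all notation below is assumed for exposition): suppose the agent earns a per-interaction price P, chooses verification effort e at cost c(e), faces hallucination probability h(e) decreasing in e, and discounts the future at rate \delta, with a hallucination ending the relationship. The agent's continuation value V would then satisfy

\[
V \;=\; \max_{e}\,\Big\{\,P - c(e) + \delta\,\big(1 - h(e)\big)\,V\,\Big\},
\qquad\text{with first-order condition}\qquad
c'(e^{\ast}) \;=\; -\,\delta\, h'(e^{\ast})\, V .
\]

In this reading, marginal verification cost is equated to the discounted marginal reduction in the risk of losing the continuation rent V; a higher price P (for instance, driven by a larger share of accuracy-sensitive users) raises V and hence the effort solving the condition, consistent with the comparative static stated in the abstract.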

Keywords: Agentic AI; Artificial intelligence; Hallucination risk; Large language models
JEL-codes: C73 D82 L14 L15
Date: 2025

Downloads: http://www.sciencedirect.com/science/article/pii/S016517652500357X (full text for ScienceDirect subscribers only)

Persistent link: https://EconPapers.repec.org/RePEc:eee:ecolet:v:255:y:2025:i:c:s016517652500357x

DOI: 10.1016/j.econlet.2025.112520

Handle: RePEc:eee:ecolet:v:255:y:2025:i:c:s016517652500357x