EconPapers    

Machine-Learning to Trust

Ran Spiegler

Papers from arXiv.org

Abstract: Can players sustain long-run trust when their equilibrium beliefs are shaped by machine-learning methods that penalize complexity? I study a game in which each agent in an infinite sequence, endowed with one-period recall, decides whether to place trust in their immediate successor. The cost of trusting is state-dependent. Each player's best response is based on a belief about others' behavior, which is a coarse fit of the true population strategy with respect to a partition of relevant contingencies. In equilibrium, this partition minimizes the sum of the mean squared prediction error and a complexity penalty proportional to its size. Relative to symmetric mixed-strategy Nash equilibrium, this solution concept significantly narrows the scope for trust.
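
The selection criterion stated in the abstract (a partition of contingencies chosen to minimize mean squared prediction error plus a penalty proportional to the partition's size) can be illustrated with a toy computation. The Python sketch below is not the paper's model: the contingency labels, trust probabilities, frequencies, the penalty weight lam, and the use of a frequency-weighted cell average as the coarse fit are all assumptions introduced for illustration.

```python
# Toy illustration (not the paper's model): pick the partition of a small
# set of contingencies that minimizes the weighted mean squared prediction
# error of a coarse fit plus lam * (number of cells). All numbers below
# are made up for the example.

def all_partitions(items):
    """Enumerate every partition of a small list of items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in all_partitions(rest):
        # put `first` in a cell of its own ...
        yield [[first]] + smaller
        # ... or merge `first` into one of the existing cells
        for i, cell in enumerate(smaller):
            yield smaller[:i] + [cell + [first]] + smaller[i + 1:]

def penalized_loss(partition, prob, freq, lam):
    """Weighted MSE of the coarse fit w.r.t. `partition`, plus lam * |partition|."""
    mse = 0.0
    for cell in partition:
        weight = sum(freq[c] for c in cell)
        # coarse fit: frequency-weighted average trust rate within the cell
        fit = sum(freq[c] * prob[c] for c in cell) / weight
        mse += sum(freq[c] * (prob[c] - fit) ** 2 for c in cell)
    return mse + lam * len(partition)

# Hypothetical contingencies with hypothetical trust rates and frequencies.
contingencies = ["low-cost", "mid-cost", "high-cost"]
prob = {"low-cost": 0.9, "mid-cost": 0.6, "high-cost": 0.1}
freq = {"low-cost": 0.5, "mid-cost": 0.3, "high-cost": 0.2}
lam = 0.02

best = min(all_partitions(contingencies),
           key=lambda p: penalized_loss(p, prob, freq, lam))
print("penalty-minimizing partition:", best)
```

With these made-up numbers the minimizer pools the low-cost and mid-cost contingencies rather than selecting the finest partition, illustrating the coarsening effect of the complexity penalty that the abstract describes; raising lam makes the selected partition coarser still.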

Date: 2025-07

Downloads: http://arxiv.org/pdf/2507.10363 (application/pdf, latest version)

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2507.10363

Handle: RePEc:arx:papers:2507.10363