Naive Learning Through Probability Overmatching
Itai Arieli (Faculty of Industrial Engineering and Management, Technion–Israel Institute of Technology, 3200003 Haifa, Israel)
Yakov Babichenko (Faculty of Industrial Engineering and Management, Technion–Israel Institute of Technology, 3200003 Haifa, Israel)
Manuel Mueller-Frank (Department of Economics, IESE Business School, University of Navarra, 28023 Madrid, Spain)
Operations Research, 2022, vol. 70, issue 6, 3420-3431
Abstract:
We analyze boundedly rational updating in a repeated interaction network model with binary actions and binary states. Agents form beliefs according to discretized DeGroot updating and apply a decision rule that assigns a (mixed) action to each belief. We first show that, under weak assumptions, random decision rules are sufficient to achieve agreement in finite time in any strongly connected network. Our main result establishes that naive learning can be achieved in any large strongly connected network. That is, if beliefs satisfy a high level of inertia, then there exist corresponding decision rules coinciding with probability overmatching such that the eventual agreement action matches the true state with probability converging to one as the network size goes to infinity.
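The dynamics described in the abstract can be illustrated with a minimal simulation sketch. Everything below is an assumption for illustration, not the paper's exact specification: a ring network stands in for a generic strongly connected network, beliefs are discretized to an evenly spaced grid, the self-weight `inertia` captures belief inertia, and `overmatch` implements one simple probability-overmatching rule (action probabilities more extreme than the underlying belief, via an exponent `gamma > 1`).

```python
import random

def overmatch(belief, gamma=4.0):
    """Illustrative probability-overmatching rule: choose action 1 with a
    probability more extreme than the belief itself (gamma > 1 exaggerates;
    gamma = 1 would be plain probability matching)."""
    a, b = belief ** gamma, (1.0 - belief) ** gamma
    return a / (a + b) if (a + b) > 0 else 0.5

def simulate(n=50, rounds=100, theta=1, acc=0.6, inertia=0.7, grid=100, seed=0):
    """Sketch of discretized DeGroot updating on a ring network.

    Hypothetical parameterization (n agents, `acc`-accurate private signals,
    belief grid {0, 1/grid, ..., 1}); returns final beliefs and the mixed
    actions drawn from the overmatching rule."""
    rng = random.Random(seed)
    # Private signals: each agent's initial belief in state 1 is `acc` or
    # 1 - acc, correct with probability `acc`.
    beliefs = [acc if (rng.random() < acc) == (theta == 1) else 1.0 - acc
               for _ in range(n)]
    for _ in range(rounds):
        new = []
        for i in range(n):
            # DeGroot step: average own belief with ring neighbors' beliefs,
            # then round back onto the discrete belief grid.
            nbr = (beliefs[(i - 1) % n] + beliefs[(i + 1) % n]) / 2.0
            b = inertia * beliefs[i] + (1.0 - inertia) * nbr
            new.append(round(b * grid) / grid)
        beliefs = new
    actions = [1 if rng.random() < overmatch(b) else 0 for b in beliefs]
    return beliefs, actions
```

Note the role of the exponent: with a belief of 0.7, `overmatch` selects action 1 with probability well above 0.7, which is the overmatching feature the paper's main result relies on; with high enough inertia, the rounding step can leave beliefs essentially fixed, so coordination happens through the realized actions.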
Keywords: Stochastic Models; naive learning; DeGroot dynamics; agreement; social networks; social learning; probability matching
Date: 2022
Downloads:
http://dx.doi.org/10.1287/opre.2021.2202 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:70:y:2022:i:6:p:3420-3431