Rationalizable Learning
Andrew Caplin, Daniel Martin and Philip Marx
No 30873, NBER Working Papers from National Bureau of Economic Research, Inc
Abstract:
The central question we address in this paper is: what can an analyst infer from choice data about what a decision maker has learned? The key constraint we impose, which is shared across models of Bayesian learning, is that any learning must be rationalizable. To implement this constraint, we introduce two conditions, one of which refines the mean-preserving spread condition of Blackwell (1953) to take account of optimality, and the other of which generalizes the NIAC condition (Caplin and Dean 2015) and the NIAS condition (Caplin and Martin 2015) to allow for arbitrary learning. We apply our framework to show how identification of what was learned can be strengthened with additional assumptions on the form of Bayesian learning.
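For orientation, the conditions named in the abstract have well-known baseline forms. The sketch below states the standard NIAS inequality of Caplin and Martin (2015) and the Bayes-plausibility (mean-preserving-spread) requirement that revealed posteriors average back to the prior. The notation ($\Omega$ for states, $\mu$ for the prior, $\gamma_a$ for the revealed posterior after action $a$, $P(a)$ for choice probabilities, $u$ for payoffs) is assumed here for illustration only; it is not taken from the paper, whose contribution is to refine and generalize these baseline conditions.

% Illustrative baseline conditions (assumed notation, not the paper's refinement).
% NIAS: each chosen action must be optimal against its own revealed posterior.
\[
\sum_{\omega \in \Omega} \gamma_a(\omega)\,\bigl[ u(a,\omega) - u(b,\omega) \bigr] \;\ge\; 0
\qquad \text{for every chosen action } a \text{ and every alternative } b ,
\]
% Bayes plausibility: revealed posteriors, weighted by choice probabilities,
% must average back to the prior (the mean-preserving-spread requirement).
\[
\sum_{a} P(a)\, \gamma_a(\omega) \;=\; \mu(\omega)
\qquad \text{for every state } \omega \in \Omega .
\]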
JEL-codes: D83 D91
Date: 2023-01
New Economics Papers: this item is included in nep-dcm and nep-mic
Note: TWP
Downloads: http://www.nber.org/papers/w30873.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:nbr:nberwo:30873
Ordering information: This working paper can be ordered from http://www.nber.org/papers/w30873
More papers in NBER Working Papers from National Bureau of Economic Research, Inc, 1050 Massachusetts Avenue, Cambridge, MA 02138, U.S.A. Contact information at EDIRC.