Robust Comparative Statics with Misspecified Bayesian Learning
Aniruddha Ghosh
Papers from arXiv.org
Abstract:
We present novel monotone comparative statics results for steady-state behavior in a dynamic optimization environment with misspecified Bayesian learning. Building on Esponda and Pouzo (2021), we analyze a Bayesian learner whose prior is over parameterized transition models but is misspecified in the sense that the true process does not belong to this set. We characterize conditions that ensure monotonicity in the steady-state distribution over states, actions, and inferred models. Additionally, we provide a new monotonicity-based proof of steady-state existence, derive an upper bound on the cost of misspecification, and illustrate the applicability of our results to several environments of general interest.
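To make the abstract's setting concrete, the sketch below simulates misspecified Bayesian learning in the simplest possible case: a two-state Markov chain whose true stay probability lies outside the learner's parameterized model set. This is an illustrative toy (all parameter values here are invented for the example), not the paper's model, which additionally involves actions and dynamic optimization; it only shows the core ingredient that the posterior over misspecified models still settles down, here on the candidate model closest in Kullback-Leibler divergence to the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# True persistence of a two-state Markov chain: stay probability 0.7.
theta_true = 0.7

# The learner's (misspecified) model set: candidate stay probabilities
# that exclude the truth, so the true process lies outside the support
# of the learner's prior.
thetas = np.array([0.4, 0.5, 0.6])
posterior = np.full(len(thetas), 1.0 / len(thetas))  # uniform prior

state = 0
for t in range(5000):
    # Draw the next state from the TRUE transition kernel.
    next_state = state if rng.random() < theta_true else 1 - state
    stayed = next_state == state
    # Likelihood of the observed transition under each candidate model.
    lik = np.where(stayed, thetas, 1.0 - thetas)
    posterior *= lik
    posterior /= posterior.sum()  # Bayes update, renormalized each step
    state = next_state

print(dict(zip(thetas, posterior.round(4))))
# The posterior concentrates on theta = 0.6, the candidate closest in
# KL divergence to the true theta = 0.7 within the misspecified set.
```

The concentration on the KL-closest model is the standard long-run behavior of misspecified Bayesian posteriors; the paper's comparative statics concern how the resulting steady-state distribution over states, actions, and inferred models moves monotonically with primitives, which this simplified sketch does not attempt to reproduce.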
Date: 2024-07, Revised 2025-03
New Economics Papers: this item is included in nep-mic
Downloads: http://arxiv.org/pdf/2407.17037 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2407.17037