Relative Maximum Likelihood updating of ambiguous beliefs
Xiaoyu Cheng
Journal of Mathematical Economics, 2022, vol. 99, issue C
Abstract:
This paper proposes and axiomatizes a new updating rule: Relative Maximum Likelihood (RML) updating for ambiguous beliefs represented by a set of priors (C). This rule takes the form of applying Bayes’ rule to a subset of C. This subset is a linear contraction of C towards its subset ascribing the maximal probability to the observed event. The degree of contraction captures the extent of willingness to discard priors based on likelihood when updating. Two well-known updating rules of multiple priors, full Bayesian (FB) and Maximum Likelihood (ML), are included as special cases of RML. An axiomatic characterization of conditional preferences generated by RML updating is provided when the preferences admit Maxmin Expected Utility representations. The axiomatization relies on weakening the axioms characterizing FB and ML. The axiom characterizing ML is identified for the first time in this paper, addressing a long-standing open question in the literature.
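As a concrete illustration of the rule described above, the updating procedure can be sketched for a finite set of priors over a finite state space. This is a hedged reading of the abstract, not code from the paper: the contracted set is taken to be the pointwise mixture α·C* + (1−α)·C, where C* collects the priors in C ascribing maximal probability to the observed event E, and all function and variable names are illustrative.

```python
import numpy as np

def rml_update(priors, event, alpha):
    """Sketch of Relative Maximum Likelihood (RML) updating.

    priors : list of 1-D numpy probability vectors, the set C
    event  : list of state indices forming the observed event E
    alpha  : degree of contraction in [0, 1];
             alpha = 0 recovers full Bayesian (FB) updating,
             alpha = 1 recovers Maximum Likelihood (ML) updating
    Returns the Bayesian updates of the contracted set of priors.
    """
    probs = [p[event].sum() for p in priors]
    max_prob = max(probs)
    # C*: priors ascribing (numerically) maximal probability to E
    c_star = [p for p, q in zip(priors, probs) if np.isclose(q, max_prob)]
    # Linear contraction of C towards C* (assumed form of the subset)
    contracted = [alpha * q + (1 - alpha) * p
                  for p in priors for q in c_star]
    updates = []
    for p in contracted:
        pe = p[event].sum()
        if pe > 0:  # apply Bayes' rule prior by prior
            cond = np.zeros_like(p)
            cond[event] = p[event] / pe
            updates.append(cond)
    return updates

# Example: two priors over three states, observed event E = {0, 1}
C = [np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.2, 0.6])]
E = [0, 1]
fb = rml_update(C, E, alpha=0.0)  # FB: every prior in C is updated
ml = rml_update(C, E, alpha=1.0)  # ML: only the max-likelihood prior survives
```

In the example, the first prior assigns probability 0.8 to E and the second only 0.4, so under ML (α = 1) only updates of the first prior remain, while FB (α = 0) retains Bayesian updates of both; intermediate α interpolates between these sets.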
Keywords: Ambiguity; Updating; Maximum likelihood; Full Bayesian; Contingent reasoning; Dynamic consistency
Date: 2022
Citations: 5 (per EconPapers)
Downloads: http://www.sciencedirect.com/science/article/pii/S0304406821001488 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:mateco:v:99:y:2022:i:c:s0304406821001488
DOI: 10.1016/j.jmateco.2021.102587
Journal of Mathematical Economics is published by Elsevier and currently edited by Atsushi Kajii.