Entropy bounds on Bayesian learning
Tristan Tomala and Olivier Gossner
Post-Print from HAL
Abstract:
An observer of a process (x_t)_{t≥1} believes the process is governed by Q whereas the true law is P. We bound the expected average distance between P(xt|x1,...,xt−1) and Q(xt|x1,...,xt−1) for t=1,...,n by a function of the relative entropy between the marginals of P and Q on the first n realizations. We apply this bound to the cost of learning in sequential decision problems and to the merging of Q to P.
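The bound described in the abstract ties average per-period prediction error to the relative entropy between the n-stage marginals of P and Q. As a rough illustration of a bound of this shape (not the paper's exact statement), the i.i.d. case combined with Pinsker's inequality already gives such a relation; the Bernoulli parameters p, q and horizon n below are hypothetical:

```python
import math

def kl_bernoulli(p, q):
    """Relative entropy D(Ber(p) || Ber(q)) in nats; assumes 0 < p, q < 1."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Hypothetical setup: the observer predicts with Bernoulli(q) while the
# true i.i.d. law is Bernoulli(p).
p, q, n = 0.6, 0.5, 20

# For i.i.d. coordinates the relative entropy of the n-stage marginals
# is additive: D(P_n || Q_n) = n * D(Ber(p) || Ber(q)).
d_n = n * kl_bernoulli(p, q)

# Per-period prediction error in total variation; here it equals |p - q|
# in every period, so the average is |p - q| as well.
avg_tv = sum(abs(p - q) for _ in range(n)) / n

# Pinsker-type bound: average TV distance <= sqrt(D(P_n || Q_n) / (2n)).
bound = math.sqrt(d_n / (2 * n))
print(avg_tv, bound, avg_tv <= bound)
```

In this sketch the averaged total variation distance stays below the entropy-based bound; the paper's result covers general (non-i.i.d.) processes via the chain rule for relative entropy.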
Keywords: Bayesian learning; Repeated decision problem; Value of information; Entropy
Date: 2008-01-01
Citations: 3 (in EconPapers)
Published in Journal of Mathematical Economics, 2008, Vol. 44, n°1, pp. 24–32. ⟨10.1016/j.jmateco.2007.04.006⟩
Related works:
Journal Article: Entropy bounds on Bayesian learning (2008) 
Working Paper: Entropy bounds on Bayesian learning (2008)
Working Paper: Entropy bounds on Bayesian learning (2008)
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-00464554
DOI: 10.1016/j.jmateco.2007.04.006