Entropy bounds on Bayesian learning

Olivier Gossner and Tristan Tomala

Post-Print from HAL

Abstract: An observer of a process (x_t) believes the process is governed by Q whereas the true law is P. We bound the expected average distance between P(x_t | x_1, ..., x_{t-1}) and Q(x_t | x_1, ..., x_{t-1}) for t = 1, ..., n by a function of the relative entropy between the marginals of P and Q on the first n realizations. We apply this bound to the cost of learning in sequential decision problems and to the merging of Q to P.
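
For intuition, here is a minimal sketch of the kind of bound described in the abstract, assuming only Pinsker's inequality, Jensen's inequality, and the chain rule for relative entropy; the paper's exact statement, choice of distance, and constants may differ:

\[
\frac{1}{n}\sum_{t=1}^{n} \mathbb{E}_{P}\,\bigl\| P(\cdot \mid x_1,\dots,x_{t-1}) - Q(\cdot \mid x_1,\dots,x_{t-1}) \bigr\|_{TV}
\;\le\; \sqrt{\frac{D(P_n \,\|\, Q_n)}{2n}}
\]

Here P_n and Q_n are the marginals of P and Q on the first n realizations and D(· || ·) is relative entropy. The chain rule gives D(P_n || Q_n) = \sum_{t=1}^{n} \mathbb{E}_P\, D\bigl(P(\cdot \mid x_1,\dots,x_{t-1}) \,\|\, Q(\cdot \mid x_1,\dots,x_{t-1})\bigr), Pinsker's inequality bounds each total-variation term by the square root of half the corresponding relative entropy, and Jensen's inequality (concavity of the square root) moves the square root outside the average.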

Keywords: Bayesian learning; Repeated decision problem; Value of information; Entropy
Date: 2008-01

Published in Journal of Mathematical Economics, 2008, 44 (1), pp. 24-32. ⟨10.1016/j.jmateco.2007.04.006⟩


Related works:
Journal Article: Entropy bounds on Bayesian learning (2008)
Working Paper: Entropy bounds on Bayesian learning (2008)
Working Paper: Entropy bounds on Bayesian learning (2008)


Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:halshs-00754314

DOI: 10.1016/j.jmateco.2007.04.006



 
Handle: RePEc:hal:journl:halshs-00754314