Bayesian Learning of Noisy Markov Decision Processes
Sumeetpal Singh,
Nicolas Chopin and
Nick Whiteley
Additional contact information
Sumeetpal Singh: CREST
Nick Whiteley: CREST
No 2010-36, Working Papers from Center for Research in Economics and Statistics
Abstract:
This work addresses the problem of estimating the optimal value function in a Markov Decision Process from observed state-action pairs. We adopt a Bayesian approach to inference, which allows both the model to be estimated and predictions about actions to be made in a unified framework, providing a principled approach to mimicry of a controller on the basis of observed data. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution over the optimal value function. This step includes a parameter expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.
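The following is a minimal, generic sketch of the kind of inference problem the abstract describes: sampling a posterior over an optimal value function from observed state-action pairs. It is not the paper's sampler (in particular, it has no parameter-expansion step); the toy MDP, the reward-vector parameterisation, the softmax choice model, and the random-walk Metropolis moves are all illustrative assumptions.

```python
# Generic illustration (not the paper's method): random-walk Metropolis over an
# unknown per-state reward vector r; the optimal value function follows from r.
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP: S states, A actions, known transition kernel P[a, s, s'].
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(A, S))          # each P[a, s, :] sums to 1

def q_values(r, n_iter=200):
    """Optimal Q-values for per-state reward r, computed by value iteration."""
    Q = np.zeros((S, A))
    for _ in range(n_iter):
        V = Q.max(axis=1)
        Q = r[:, None] + gamma * np.einsum('asx,x->sa', P, V)
    return Q

def log_likelihood(r, states, actions, beta=5.0):
    """Softmax (Boltzmann) choice model linking Q-values to observed actions."""
    Q = beta * q_values(r)
    logZ = np.logaddexp.reduce(Q, axis=1)
    return np.sum(Q[states, actions] - logZ[states])

# Simulate observed state-action pairs from a 'true' reward vector.
r_true = np.array([1.0, -1.0, 0.5, 0.0])
states = rng.integers(0, S, size=200)
Q_true = 5.0 * q_values(r_true)
probs = np.exp(Q_true - np.logaddexp.reduce(Q_true, axis=1, keepdims=True))
actions = np.array([rng.choice(A, p=probs[s]) for s in states])

# Random-walk Metropolis over r with a standard normal prior.
def log_post(r):
    return -0.5 * np.sum(r**2) + log_likelihood(r, states, actions)

r, lp, samples = np.zeros(S), log_post(np.zeros(S)), []
for it in range(2000):
    prop = r + 0.1 * rng.standard_normal(S)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        r, lp = prop, lp_prop
    samples.append(q_values(r).max(axis=1))     # posterior draw of V*

V_post = np.mean(samples[1000:], axis=0)        # posterior mean value function
print("posterior mean value function:", V_post)
```

Under these assumptions, each retained MCMC draw of r yields a draw of the optimal value function, so the posterior over V* is obtained as a by-product of sampling the model parameters.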
Pages: 36
Date: 2010
Downloads: http://crest.science/RePEc/wpstorage/2010-36.pdf (CREST working paper version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:crs:wpaper:2010-36