Finding optimal memoryless policies of POMDPs under the expected average reward criterion
Yanjie Li,
Baoqun Yin and
Hongsheng Xi
European Journal of Operational Research, 2011, vol. 211, issue 3, 556-567
Abstract:
In this paper, partially observable Markov decision processes (POMDPs) with discrete state and action spaces under the average reward criterion are considered from a recently developed sensitivity-based point of view. By analyzing the average-reward performance difference formula, we propose a policy iteration algorithm with step sizes to obtain an optimal or locally optimal memoryless policy. The algorithm improves the policy along the same direction as standard policy iteration, and suitable step sizes guarantee its convergence. Moreover, the algorithm can be applied to Markov decision processes (MDPs) with correlated actions. Two numerical examples illustrate the applicability of the algorithm.
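A rough illustration (not from the paper): in the sensitivity-based framework this work draws on, the average-reward performance difference formula is commonly stated as eta' - eta = pi'[(P' - P)g + (f' - f)], where pi' is the steady-state distribution under the new policy, g is the performance potential (bias) of the current policy, and f is the reward vector. The Python sketch below shows one plausible reading of a "policy iteration with step sizes" update for a memoryless POMDP policy; the names (P, r, Q_obs, theta, step) and the overall structure are illustrative assumptions, not the authors' notation, and the paper's actual algorithm may differ in detail.

import numpy as np

def stationary_dist(P_pi):
    # Stationary distribution of an (assumed ergodic) transition matrix P_pi.
    n = P_pi.shape[0]
    A = np.vstack([P_pi.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def potentials(P_pi, r_pi, d):
    # Average-reward performance potentials g (up to a constant) and gain eta,
    # via the fundamental-matrix form (I - P_pi + e d^T) g = r_pi - eta e.
    n = P_pi.shape[0]
    eta = d @ r_pi
    A = np.eye(n) - P_pi + np.outer(np.ones(n), d)
    return np.linalg.solve(A, r_pi - eta), eta

def improve(theta, P, r, Q_obs, step=0.3):
    # One improvement step with step size `step`.
    # theta[y, a] is the memoryless policy Pr(a | observation y);
    # P[a, s, s'] are action-dependent transitions, r[s, a] rewards,
    # Q_obs[s, y] = Pr(y | s) the observation probabilities.
    nA, nY = P.shape[0], Q_obs.shape[1]
    mu = Q_obs @ theta                      # induced state-action probabilities
    P_pi = np.einsum('sa,ast->st', mu, P)   # state transitions under the policy
    r_pi = np.einsum('sa,sa->s', mu, r)
    d = stationary_dist(P_pi)
    g, eta = potentials(P_pi, r_pi, d)
    Qsa = r + np.einsum('ast,t->sa', P, g)  # Q(s,a) = r(s,a) + sum_t P(t|s,a) g(t)
    new_theta = theta.copy()
    for y in range(nY):
        w = d * Q_obs[:, y]                 # states weighted by observation likelihood
        if w.sum() > 0:
            qy = w @ Qsa / w.sum()          # observation-conditional Q-values
            greedy = np.eye(nA)[np.argmax(qy)]
            # Move the action distribution a step toward the greedy direction,
            # rather than jumping to it as plain policy iteration would.
            new_theta[y] = (1 - step) * theta[y] + step * greedy
    return new_theta, eta

# Usage sketch: repeat new_theta, eta = improve(theta, P, r, Q_obs) until eta
# stops improving; a suitably chosen step size is what the paper relies on for
# convergence to an optimal or locally optimal memoryless policy.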
Keywords: POMDPs; Performance difference; Policy iteration with step sizes; Correlated actions; Memoryless policy
Date: 2011
Downloads: http://www.sciencedirect.com/science/article/pii/S0377-2217(10)00880-5 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:ejores:v:211:y:2011:i:3:p:556-567