Non-randomized policies for constrained Markov decision processes
Richard Chen and
Eugene Feinberg
Mathematical Methods of Operations Research, 2007, vol. 66, issue 1, 165-179
Abstract:
This paper addresses constrained Markov decision processes with expected discounted total cost criteria that are controlled by non-randomized policies. A dynamic programming approach is used to construct optimal policies. The convergence of the sequence of finite-horizon value functions to the infinite-horizon value function is also shown. A simple example illustrating an application is presented. Copyright Springer-Verlag 2007
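As a rough illustration of the abstract's dynamic programming theme, the sketch below runs finite-horizon backups on a made-up two-state, two-action discounted MDP and shows the finite-horizon value functions settling toward the infinite-horizon value function, with a deterministic (non-randomized) greedy policy read off at the end. All transition probabilities, costs, and the discount factor are invented for illustration; this is not the paper's construction for constrained MDPs, which additionally handles constraint costs.

import numpy as np

# Illustrative sketch only (assumed toy data, not the paper's constrained
# construction): finite-horizon dynamic programming backups on a 2-state,
# 2-action discounted MDP, showing the finite-horizon value functions
# approaching the infinite-horizon value function.

gamma = 0.9                          # discount factor
# P[a, s, s2]: probability of moving from state s to s2 under action a
P = np.array([[[0.8, 0.2],
               [0.3, 0.7]],
              [[0.5, 0.5],
               [0.9, 0.1]]])
# c[s, a]: one-step cost of taking action a in state s
c = np.array([[1.0, 2.0],
              [0.5, 3.0]])

def backup(v):
    """One dynamic-programming backup for expected discounted total cost."""
    q = c + gamma * np.einsum('ast,t->sa', P, v)   # q[s, a]
    return q.min(axis=1), q.argmin(axis=1)

v = np.zeros(2)                      # V_0: zero terminal cost
for n in range(1, 201):
    v_next, policy = backup(v)       # V_n computed from V_{n-1}
    if np.max(np.abs(v_next - v)) < 1e-10:
        break
    v = v_next

print("horizon", n, "value", v_next, "greedy deterministic policy", policy)

In the paper's constrained setting the recursion carries additional structure for the constraint costs; the loop above only captures the unconstrained discounted backup whose finite-horizon values converge to the infinite-horizon value.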
Keywords: Constrained Markov decision processes; Dynamic programming; Non-randomized policies
Date: 2007
Citations: 4 (in EconPapers)
Downloads: http://hdl.handle.net/10.1007/s00186-006-0133-x (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:spr:mathme:v:66:y:2007:i:1:p:165-179
Ordering information: This journal article can be ordered from http://www.springer.com/economics/journal/00186
DOI: 10.1007/s00186-006-0133-x
Mathematical Methods of Operations Research is currently edited by Oliver Stein
More articles in Mathematical Methods of Operations Research from Springer, Gesellschaft für Operations Research (GOR), Nederlands Genootschap voor Besliskunde (NGB)
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.