Blackwell optimality in the class of all policies in Markov decision chains with a Borel state space and unbounded rewards
Arie Hordijk and Alexander A. Yushkevich
Mathematical Methods of Operations Research, 1999, vol. 50, issue 3, pages 421-448
Abstract:
This paper is the second part of our study of Blackwell optimal policies in Markov decision chains with a Borel state space and unbounded rewards. We prove that a stationary policy is Blackwell optimal in the class of all history-dependent policies if it is Blackwell optimal in the class of stationary policies. We also develop recurrence and drift conditions which ensure the ergodicity and integrability assumptions made in the previous paper and which are more suitable for applications. As an example we study a cash-balance model. Copyright Springer-Verlag Berlin Heidelberg 1999
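For orientation, a minimal statement of the standard notion of Blackwell optimality (not reproduced from the paper itself): a policy $\pi^*$ is Blackwell optimal if there exists a discount factor $\alpha_0 \in (0,1)$ such that for every $\alpha \in (\alpha_0, 1)$,
$$ V_\alpha(x, \pi^*) \;\ge\; V_\alpha(x, \pi) \qquad \text{for all policies } \pi \text{ and all states } x, $$
where $V_\alpha$ denotes the expected total $\alpha$-discounted reward. The drift conditions referred to in the abstract are, in this literature, typically of Foster-Lyapunov type; one common illustrative form (an assumption stated here for context, not the paper's exact condition) is
$$ \int V(y)\, p(dy \mid x, a) \;\le\; \beta\, V(x) + b\,\mathbf{1}_C(x), \qquad \beta < 1,\ b < \infty, $$
for a weight function $V \ge 1$ and a suitable set $C$, which, together with appropriate recurrence assumptions on $C$, yields geometric ergodicity and the integrability of rewards bounded by $V$.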
Keywords: Markov decision chains; Blackwell optimality; drift and recurrence conditions
Date: 1999
Downloads: http://hdl.handle.net/10.1007/s001860050079 (text/html; full text restricted to subscribers)
Persistent link: https://EconPapers.repec.org/RePEc:spr:mathme:v:50:y:1999:i:3:p:421-448
Ordering information: This journal article can be ordered from
http://www.springer.com/economics/journal/00186
DOI: 10.1007/s001860050079