Nearly optimal stationary policies in negative dynamic programming

Rolando Cavazos-Cadena and Raúl Montes-de-Oca

Mathematical Methods of Operations Research, 1999, vol. 49, issue 3, pages 441-456

Abstract: This work concerns controlled Markov chains with denumerable state space and discrete time parameter. The reward function is assumed to be ≤ 0, and the performance of a control policy is measured by the expected total-reward criterion. Within this context, sufficient conditions are given under which the existence of a stationary policy that is ε-optimal at every state is guaranteed. Copyright Springer-Verlag Berlin Heidelberg 1999
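
For readers unfamiliar with the terminology, the criterion and the optimality notion can be stated in standard Markov decision process notation (generic conventions, not necessarily the paper's own symbols): with reward function R ≤ 0, state process (x_t), actions (a_t), and expectation under policy π from initial state x,

    V(\pi, x) = \mathbb{E}_x^{\pi}\!\left[\sum_{t=0}^{\infty} R(x_t, a_t)\right],
    \qquad
    V^{*}(x) = \sup_{\pi} V(\pi, x),

and a stationary policy f is uniformly \varepsilon-optimal when

    V(f, x) \ge V^{*}(x) - \varepsilon \quad \text{for every state } x.

Because R ≤ 0, the partial sums decrease monotonically, so the expected total reward is well defined in [-∞, 0] without additional integrability assumptions.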

Keywords: Markov decision processes; expected total-reward criterion; negative rewards; uniformly ε-optimal stationary policies
Date: 1999

Downloads: http://hdl.handle.net/10.1007/s001860050060 (text/html)
Access to full text is restricted to subscribers.

Persistent link: https://EconPapers.repec.org/RePEc:spr:mathme:v:49:y:1999:i:3:p:441-456

Ordering information: This journal article can be ordered from
http://www.springer.com/economics/journal/00186

DOI: 10.1007/s001860050060

Mathematical Methods of Operations Research is currently edited by Oliver Stein

More articles in Mathematical Methods of Operations Research from Springer, Gesellschaft für Operations Research (GOR), Nederlands Genootschap voor Besliskunde (NGB)
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:spr:mathme:v:49:y:1999:i:3:p:441-456