EconPapers    

Average Cost Optimal Stationary Policies in Infinite State Markov Decision Processes with Unbounded Costs

Linn I. Sennott
Additional contact information
Linn I. Sennott: Illinois State University, Normal, Illinois

Operations Research, 1989, vol. 37, issue 4, 626-633

Abstract: We deal with infinite state Markov decision processes with unbounded costs. Three simple conditions, based on the optimal discounted value function, guarantee the existence of an expected average cost optimal stationary policy. Sufficient conditions are the existence of a distinguished state of smallest discounted value and a single stationary policy inducing an irreducible, ergodic Markov chain for which the average cost of a first passage from any state to the distinguished state is finite. A result to verify this is also given. Two examples illustrate the ease of applying the criteria.
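For orientation, conditions of the type described in the abstract are commonly stated in the literature along the following lines; the notation below is illustrative and not quoted from the paper:

```latex
% Sketch of Sennott-type sufficient conditions (illustrative notation).
% Let $v_\alpha(i)$ denote the minimal expected total $\alpha$-discounted
% cost starting from state $i$, $0 < \alpha < 1$, and let $0$ be a
% distinguished state.
\begin{enumerate}
  \item $v_\alpha(i) < \infty$ for every state $i$ and every
        $\alpha \in (0,1)$.
  \item There exists $N \ge 0$ such that
        $h_\alpha(i) := v_\alpha(i) - v_\alpha(0) \ge -N$
        for all states $i$ and all $\alpha$.
  \item There exist constants $M(i) \ge 0$ with $h_\alpha(i) \le M(i)$
        for all $\alpha$, and for each state $i$ some action $a$ with
        $\sum_j p_{ij}(a)\, M(j) < \infty$.
\end{enumerate}
% Under conditions of this kind, an average cost optimal stationary
% policy exists, with optimal average cost
% $g = \lim_{\alpha \to 1^-} (1 - \alpha)\, v_\alpha(0)$.
```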

Keywords: dynamic programming; infinite state Markov decision processes; average cost; queueing control models
Date: 1989
Citations: 15 (as tracked in EconPapers)

Downloads: http://dx.doi.org/10.1287/opre.37.4.626 (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:37:y:1989:i:4:p:626-633

More articles in Operations Research from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher ().

 
Page updated 2025-03-19
Handle: RePEc:inm:oropre:v:37:y:1989:i:4:p:626-633