Normalized Markov Decision Chains I; Sensitive Discount Optimality

Uriel G. Rothblum
Additional contact information
Uriel G. Rothblum: New York University, New York, New York

Operations Research, 1975, vol. 23, issue 4, 785-795

Abstract: In this paper we study sensitive discount optimality criteria for finite-state, finite-action, discrete-time-parameter, stationary generalized Markov decision chains. We extend previous results obtained by Miller and Veinott, and by Veinott, for substochastic transition matrices to arbitrary nonnegative matrices with spectral radius not exceeding one. In particular, we generalize their policy improvement algorithm for finding a stationary policy that maximizes the expected discounted reward for all sufficiently small positive interest rates.
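The abstract's policy improvement algorithm builds on ordinary policy iteration for discounted Markov decision chains. As background, the following is a minimal sketch of standard policy iteration at a single fixed discount factor; it is not the paper's sensitive-discount algorithm, which instead selects a policy optimal simultaneously for all sufficiently small interest rates. All names and the toy problem below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def policy_iteration(P, r, beta):
    """Standard policy iteration for a finite discounted Markov decision chain.

    P[a] is the transition matrix under action a (substochastic rows allowed),
    r[a] is the one-step reward vector under action a,
    beta in (0, 1) is the discount factor.
    Returns a stationary policy (one action per state) and its value vector.
    This sketch fixes beta; the paper's algorithm instead optimizes for all
    sufficiently small positive interest rates at once.
    """
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - beta * P_pi) v = r_pi for the
        # current policy pi.
        P_pi = P[policy, np.arange(n_states), :]
        r_pi = r[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - beta * P_pi, r_pi)
        # Policy improvement: in each state, pick the action maximizing
        # the one-step lookahead value q(a, s) = r(a, s) + beta * (P[a] v)(s).
        q = r + beta * P @ v
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```

With finitely many states and actions, each improvement step strictly increases the value vector until the optimal policy is reached, so the loop terminates in finitely many iterations.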

Date: 1975

Downloads: http://dx.doi.org/10.1287/opre.23.4.785 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:23:y:1975:i:4:p:785-795


