On Dynamic Programming with Unbounded Rewards

Steven A. Lippman (University of California, Los Angeles)

Management Science, 1975, vol. 21, issue 11, 1225-1233

Abstract: Using the technique employed by the author in an earlier paper, the existence of an optimal stationary policy, obtainable from the usual functional equation, is again established in the presence of a bound (not necessarily polynomial) on the one-period reward of a semi-Markov decision process. This is done for both the discounted-cost and the average-cost cases. In addition to allowing an uncountable state space, the law of motion of the system is rather general in that any state may be reached in a single transition; there is, however, a bound on a weighted moment of the next state reached. Finally, the applicability of these results is indicated.
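
The functional equation referred to is the Bellman equation of a discounted decision process, v(s) = sup_a { r(s,a) + beta E[v(S') | s,a] }. The sketch below is a minimal illustration only, not the paper's construction: it iterates that equation for a small random finite-state MDP in Python, whereas the paper treats uncountable state spaces, semi-Markov transitions, and unbounded rewards. The instance (P, r, beta) and the stopping tolerance are invented for illustration.

    import numpy as np

    # Illustrative value iteration for the discounted functional equation
    #   v(s) = max_a [ r(s, a) + beta * sum_{s'} P(s' | s, a) * v(s') ].
    # Random finite instance; rewards here are bounded, unlike in the paper.
    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 3
    P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, :] sums to 1
    r = rng.normal(size=(n_states, n_actions))  # one-period rewards
    beta = 0.9  # discount factor

    v = np.zeros(n_states)
    for _ in range(10_000):
        # Q(s, a) = r(s, a) + beta * E[v(next state) | s, a]
        q = r + beta * np.stack([P[a] @ v for a in range(n_actions)], axis=1)
        v_new = q.max(axis=1)
        if np.abs(v_new - v).max() < 1e-10:  # sup-norm stopping rule
            v = v_new
            break
        v = v_new

    # The optimal stationary policy acts greedily with respect to the
    # fixed point of the functional equation.
    policy = q.argmax(axis=1)
    print("value function:", np.round(v, 3))
    print("stationary policy:", policy)

The paper's contribution is that this contraction argument survives an unbounded one-period reward: the ordinary sup norm is replaced by a weighted one, and the bound on a weighted moment of the next state keeps the Bellman operator a contraction in that norm.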

Date: 1975
Citations: 11 (as tracked in EconPapers)

Downloads: http://dx.doi.org/10.1287/mnsc.21.11.1225 (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:21:y:1975:i:11:p:1225-1233

Handle: RePEc:inm:ormnsc:v:21:y:1975:i:11:p:1225-1233