EconPapers    

Countable-State, Continuous-Time Dynamic Programming with Structure

Steven A. Lippman (University of California, Los Angeles)

Operations Research, 1976, vol. 24, issue 3, 477-490

Abstract: We consider the problem P of maximizing the expected discounted reward earned in a continuous-time Markov decision process with countable state space and finite action space. (The reward rate is merely bounded by a polynomial.) By examining a sequence ⟨P_N⟩ of approximating problems, each of which is a semi-Markov decision process with exponential transition rate Λ_N, Λ_N ↗ ∞, we are able to show that P is obtained as the limit of the P_N. The value in representing P as the limit of the P_N is that structural properties present in each P_N persist, in both the finite and the infinite horizon problem. Three queuing optimization models illustrating the method are given.
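The approximation the abstract describes is closely related to uniformization: replacing a continuous-time process by a discrete-time one with a single exponential rate Λ bounding all transition rates, a fictitious self-loop absorbing the leftover probability, and discount factor Λ/(α + Λ). The sketch below is a hypothetical illustration (not taken from the paper): value iteration on a uniformized, truncated M/M/1 admission-control queue, with all rates, costs, and the truncation level N chosen for the example.

```python
import numpy as np

# Assumed parameters for the illustration (not from the paper).
lam, mu = 1.0, 1.5        # arrival and service rates
alpha = 0.1               # continuous-time discount rate
R, h = 5.0, 1.0           # lump-sum admission reward, holding cost per customer
N = 50                    # state-space truncation: states s = 0..N are queue lengths
Lam = lam + mu            # uniformization constant: bounds every total transition rate
beta = Lam / (alpha + Lam)  # discrete-time discount factor after uniformization

def value_iteration(tol=1e-10):
    """Solve the uniformized discrete-time problem; action 0 rejects, 1 admits."""
    V = np.zeros(N + 1)
    while True:
        Q = np.empty((N + 1, 2))
        for s in range(N + 1):
            for a in (0, 1):
                # Reward rate (reward earned only when an arrival is actually
                # admitted), converted to an equivalent one-step reward.
                r = ((lam * R if (a == 1 and s < N) else 0.0) - h * s) / (alpha + Lam)
                up = lam / Lam if (a == 1 and s < N) else 0.0  # admitted arrival
                down = mu / Lam if s > 0 else 0.0              # service completion
                stay = 1.0 - up - down                         # fictitious self-loop
                Q[s, a] = r + beta * (up * V[min(s + 1, N)]
                                      + down * V[max(s - 1, 0)]
                                      + stay * V[s])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration()
```

Because holding costs are linear in the queue length while the admission reward is fixed, the computed policy admits at low congestion and rejects at high congestion, the kind of structural (threshold) property that, per the abstract, persists in the limit as Λ_N ↗ ∞.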

Date: 1976
Citations: View citations in EconPapers (1)

Downloads: http://dx.doi.org/10.1287/opre.24.3.477 (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:24:y:1976:i:3:p:477-490

More articles in Operations Research from INFORMS.

 
Page updated 2025-03-19
Handle: RePEc:inm:oropre:v:24:y:1976:i:3:p:477-490