Markov-Renewal Programming. I: Formulation, Finite Return Models
William S. Jewell (Operations Research Center and Department of Industrial Engineering, University of California, Berkeley)
Operations Research, 1963, vol. 11, issue 6, 938-948
Abstract:
A special structure in dynamic programming, studied by Bellman, Blackwell, D'Épenoux, Derman, Howard, Manne, Oliver, Wolfe and Dantzig, and others, is the problem of programming over a Markov chain. This paper extends their results and solution algorithms to programming over a Markov-renewal process, in which the intervals between transitions of the system from state i to state j are independent samples from a distribution that may depend on both i and j. For these processes, a general reward structure and a decision mechanism are postulated; the problem is to make decisions at each transition so as to maximize the total expected reward at the end of the planning horizon. The paper is divided into two parts. This part describes the properties of Markov-renewal processes, the reward structure, and the decision process. Algorithms for finite-horizon models and for infinite-horizon models with discounting are presented. The second part will investigate models with infinite return.
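As an illustrative sketch only (the symbols below are assumptions, not the paper's own notation), the discounted finite-horizon recursion for such a Markov-renewal decision process typically takes the form

v_i(t) = \max_{k \in K_i} \sum_{j} p_{ij}^{k} \int_{0}^{t} e^{-\alpha \tau} \left[ r_{ij}^{k} + v_j(t - \tau) \right] \, dF_{ij}^{k}(\tau)

where v_i(t) is the maximal expected discounted reward with horizon t remaining in state i, K_i is the set of decisions available in state i, p_{ij}^{k} and F_{ij}^{k} are the transition probabilities and holding-time distributions under decision k, r_{ij}^{k} is a lump reward assumed to be received at the transition epoch, and \alpha \ge 0 is the discount rate. Rewards accrued when no transition occurs before the horizon expires are omitted from this sketch for brevity.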
Date: 1963
Downloads: http://dx.doi.org/10.1287/opre.11.6.938 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:11:y:1963:i:6:p:938-948