Continuous‐time Markov decision processes with nonzero terminal reward
Kyung Y. Jo
Naval Research Logistics Quarterly, 1984, vol. 31, issue 2, 265-274
Abstract:
In this article we consider a continuous‐time Markov decision process with a denumerable state space and nonzero terminal rewards. We first establish the necessary and sufficient optimality condition without any restriction on the cost functions. The necessary condition is derived through the Pontryagin maximum principle and the sufficient condition, by the inherent structure of the problem. We introduce a dynamic programming approximation algorithm for the finite‐horizon problem. As the time between discrete points decreases, the optimal policy of the discretized problem converges to that of the continuous‐time problem in the sense of weak convergence. For the infinite‐horizon problem, a successive approximation method is introduced as an alternative to a policy iteration method.
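The finite‐horizon discretization described in the abstract can be illustrated with a small sketch. The paper itself is not reproduced here, so the example below is only a generic backward‐induction scheme under an assumed rate/reward representation: transition rates `q(j|i,a)` for `j ≠ i`, a reward rate `r(i,a)`, a nonzero terminal reward `g(i)`, and a one‐step kernel approximated by `P_h = I + h·Q(a)` for a small step `h`. All state, action, and rate values are hypothetical.

```python
def discretized_dp(states, actions, rates, reward_rate, terminal, horizon, h):
    """Backward induction on a time grid of step h for a finite-horizon CTMDP.

    rates[(i, a)] maps j != i to the transition rate q(j | i, a);
    the one-step transition kernel is approximated by P_h = I + h * Q(a),
    which is a valid probability distribution when h * sum_j q(j|i,a) < 1.
    """
    n_steps = int(round(horizon / h))
    V = dict(terminal)                      # V_N(i) = g(i): nonzero terminal reward
    policy = {}
    for _ in range(n_steps):
        V_new = {}
        for i in states:
            best_val, best_a = float("-inf"), None
            for a in actions:
                q = rates[(i, a)]
                total = sum(q.values())
                # Expected one-step value: reward accrued over h, plus the
                # discretized transition probabilities applied to V.
                val = reward_rate[(i, a)] * h
                val += (1.0 - h * total) * V[i]
                val += sum(h * qij * V[j] for j, qij in q.items())
                if val > best_val:
                    best_val, best_a = val, a
            V_new[i] = best_val
            policy[i] = best_a
        V = V_new
    return V, policy


# Illustrative two-state example (all numbers hypothetical): action 'a' moves
# state 0 toward the absorbing state 1, which earns a reward rate and carries
# a nonzero terminal reward; action 'b' idles for a small immediate reward.
states = [0, 1]
actions = ["a", "b"]
rates = {(0, "a"): {1: 1.0}, (1, "a"): {}, (0, "b"): {}, (1, "b"): {}}
reward_rate = {(0, "a"): 0.0, (1, "a"): 1.0, (0, "b"): 0.1, (1, "b"): 0.0}
terminal = {0: 0.0, 1: 2.0}                 # g(1) != 0
V, policy = discretized_dp(states, actions, rates, reward_rate,
                           terminal, horizon=1.0, h=0.5)
```

As `h` decreases, the abstract states that the optimal policy of the discretized problem converges weakly to that of the continuous‐time problem; the sketch above only shows the coarse `h = 0.5` grid.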
Date: 1984
Downloads: https://doi.org/10.1002/nav.3800310208
Persistent link: https://EconPapers.repec.org/RePEc:wly:navlog:v:31:y:1984:i:2:p:265-274
More articles in Naval Research Logistics Quarterly from John Wiley & Sons