Optimal control of stationary Markov processes
R. Morton
Stochastic Processes and their Applications, 1973, vol. 1, issue 3, 237-249
Abstract:
Sufficient conditions are given for the optimal control of Markov processes when the control policy is stationary and the process possesses a stationary distribution. The costs are unbounded and additive, and may or may not be discounted. Applications to Semi-Markov processes are included, and the results for random walks are related to the author's previous papers on diffusion processes.
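Illustrative note (not from the paper; all matrices, costs and names below are invented for the example): the setting the abstract describes, a fixed stationary policy whose induced chain has a stationary distribution and additive costs that may or may not be discounted, can be sketched numerically for a finite-state chain. The long-run average cost of such a policy is the stationary distribution weighted by the per-state costs, while the discounted cost solves a linear system.

```python
# Minimal sketch, assuming a finite-state chain induced by some fixed
# stationary policy; not the paper's method, just the cost criteria it names.
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 for an irreducible transition matrix P."""
    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with the normalisation row.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def average_cost(P, c):
    """Undiscounted long-run average cost per step: pi . c."""
    return stationary_distribution(P) @ c

def discounted_cost(P, c, beta):
    """Expected total discounted cost from each state: (I - beta P)^{-1} c."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - beta * P, c)

if __name__ == "__main__":
    # Hypothetical two-state chain under a fixed stationary policy.
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    c = np.array([1.0, 5.0])  # per-step cost in each state
    print("stationary distribution:", stationary_distribution(P))
    print("average cost per step:  ", average_cost(P, c))
    print("discounted cost (0.95): ", discounted_cost(P, c, 0.95))
```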
Date: 1973
Citations: 1 (in EconPapers)
Downloads: http://www.sciencedirect.com/science/article/pii/0304-4149(73)90002-1 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:spapps:v:1:y:1973:i:3:p:237-249
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/supportfaq.cws_home/regional
https://shop.elsevie ... _01_ooc_1&version=01