Optimal stationary policies in the vector-valued Markov decision process
Kazuyoshi Wakuta
Stochastic Processes and their Applications, 1992, vol. 42, issue 1, 149-156
Abstract:
In this paper we are concerned with the vector-valued Markov decision process and consider the characterization of optimal stationary policies among the set of all (randomized, history-dependent) policies. Using the scalarization technique developed for the vector maximizing problem in nonlinear programming, we present a necessary condition and a (different) sufficient condition for a stationary policy to be optimal among the set of all policies.
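The scalarization idea the abstract refers to can be illustrated with a minimal sketch (not the paper's construction): a finite MDP with vector-valued rewards is reduced to a scalar MDP by taking a strictly positive weighted combination of the reward components, and the scalarized problem is then solved by standard value iteration to obtain a stationary deterministic policy. All transition probabilities, rewards, weights, and the discount factor below are made-up assumptions for illustration only.

```python
# Illustrative sketch (assumed data, not from the paper): weighted-sum
# scalarization of a finite MDP with vector-valued rewards, then value
# iteration on the scalarized problem.
import numpy as np

n_states, n_actions, reward_dim = 2, 2, 2
gamma = 0.9  # discount factor (assumed)

# P[s, a, s'] : transition probabilities (assumed values)
P = np.array([
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.5, 0.5], [0.1, 0.9]],
])

# R[s, a, :] : vector-valued immediate reward (assumed values)
R = np.array([
    [[1.0, 0.0], [0.0, 1.0]],
    [[0.5, 0.5], [0.2, 0.8]],
])

# Strictly positive weights, as in the scalarization used for properly
# efficient solutions of vector maximizing problems.
w = np.array([0.6, 0.4])
r = R @ w  # scalarized reward r[s, a]

# Value iteration on the scalarized MDP.
V = np.zeros(n_states)
for _ in range(1000):
    Q = r + gamma * (P @ V)        # Q[s, a] = r[s, a] + gamma * sum_s' P[s,a,s'] V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)          # a stationary deterministic policy
print("scalarized optimal stationary policy:", policy)
print("scalarized values:", V)
```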
Keywords: dynamic programming; Markov decision process; multiobjective; proper efficiency
Date: 1992
Citations: 2 (in EconPapers)
Downloads (external link):
http://www.sciencedirect.com/science/article/pii/0304-4149(92)90031-K
Full text for ScienceDirect subscribers only
Persistent link: https://EconPapers.repec.org/RePEc:eee:spapps:v:42:y:1992:i:1:p:149-156
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/supportfaq.cws_home/regional
https://shop.elsevie ... _01_ooc_1&version=01
Stochastic Processes and their Applications is currently edited by T. Mikosch
More articles in Stochastic Processes and their Applications from Elsevier
Bibliographic data for series maintained by Catherine Liu.