Markov-achievable payoffs for finite-horizon decision models
Victor Pestien and
Xiaobo Wang
Stochastic Processes and their Applications, 1998, vol. 73, issue 1, 101-118
Abstract:
Consider the class of n-stage decision models with state space S, action space A, and payoff function g : (S × A)^n × S → R. The function g is Markov-achievable if, for every set of available randomized actions and every transition law, each plan has a corresponding Markov plan whose value is at least as good. A condition on g, called the "non-forking linear sections property", is necessary and sufficient for g to be Markov-achievable. If g satisfies the slightly stronger "general linear sections property", then g can be written as a sum of products of certain simple neighboring-stage payoffs.
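In standard decision-model notation (a sketch; the symbols beyond those in the abstract are assumptions, not taken from the paper), the distinction at issue is that a general plan may condition each action on the full history, while a Markov plan conditions only on the current stage and state:

```latex
% General plan: action at stage t may depend on the whole history
\pi_t(\,\cdot \mid s_0, a_0, s_1, a_1, \dots, s_t\,)
% Markov plan: action at stage t depends only on the current state
\sigma_t(\,\cdot \mid s_t\,)
```

Markov-achievability of g then says that for every transition law and every general plan π there is a Markov plan σ with E_σ[g] ≥ E_π[g].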
Keywords: Markov decision model; Payoff function; Markov plan (search for similar items in EconPapers)
Date: 1998
Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0304-4149(97)00095-1
Full text for ScienceDirect subscribers only
Persistent link: https://EconPapers.repec.org/RePEc:eee:spapps:v:73:y:1998:i:1:p:101-118
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/supportfaq.cws_home/regional
https://shop.elsevie ... _01_ooc_1&version=01
Stochastic Processes and their Applications is currently edited by T. Mikosch
More articles in Stochastic Processes and their Applications from Elsevier
Bibliographic data for series maintained by Catherine Liu.