Dynamic Programming for a Stochastic Markovian Process with an Application to the Mean Variance Models
Juval Goldwerger (Bar-Ilan University)
Management Science, 1977, vol. 23, issue 6, 612-620
Abstract:
This paper presents a fresh perspective on the Markov reward process. In order to bring Howard's [Howard, R. A. 1969. Dynamic Programming and Markov Processes. The M.I.T. Press, 5th printing.] model closer to practical applicability, two very important aspects of the model are restated: (a) the rewards are made random variables instead of known constants, and (b) any decision rule over the moment set of the portfolio distribution is allowed, rather than assuming maximization of the expected value of the portfolio outcome. These modifications provide a natural setting for the rewards to be normally distributed, and thus applying the mean variance models becomes possible. An algorithm for solution is presented, and a special case, the mean-variability decision rule of maximizing (\mu /\sigma), is worked out in detail.
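The abstract's two modifications, random rewards and a decision rule over the distribution's moments, can be illustrated with a minimal sketch. The code below is not Goldwerger's algorithm: it simply enumerates stationary policies for a hypothetical 2-state, 2-action Markov reward process with normally distributed one-step rewards (all numbers are made up), computes the mean and variance of the finite-horizon total reward under each policy, and selects the policy maximizing \mu/\sigma rather than \mu alone.

```python
import itertools
import math

# Hypothetical example data (not from the paper).
# P[a][s][s'] = transition probability from state s to s' under action a;
# MU[a][s], SIG[a][s] = mean and std. dev. of the normal one-step reward.
ACTIONS = [0, 1]
P = {
    0: [[0.7, 0.3], [0.4, 0.6]],
    1: [[0.2, 0.8], [0.5, 0.5]],
}
MU = {0: [3.0, 1.0], 1: [2.0, 4.0]}
SIG = {0: [2.0, 0.5], 1: [0.5, 3.0]}

def evaluate(policy, horizon=10):
    """Mean and variance of the total reward over `horizon` steps under a
    fixed stationary policy, assuming rewards independent across steps."""
    n = len(policy)
    m = [0.0] * n  # mean of remaining total reward from each state
    v = [0.0] * n  # variance of remaining total reward from each state
    for _ in range(horizon):
        m2, v2 = [0.0] * n, [0.0] * n
        for s in range(n):
            a = policy[s]
            p = P[a][s]
            em = sum(p[t] * m[t] for t in range(n))            # E[m(S')]
            ev = sum(p[t] * v[t] for t in range(n))            # E[v(S')]
            vm = sum(p[t] * (m[t] - em) ** 2 for t in range(n))  # Var[m(S')]
            m2[s] = MU[a][s] + em
            # law of total variance plus the independent reward noise
            v2[s] = SIG[a][s] ** 2 + ev + vm
        m, v = m2, v2
    return m, v

def best_policy(start=0, horizon=10):
    """Enumerate all stationary policies and keep the one maximizing
    mu/sigma of the total reward from `start` (not expected value alone)."""
    best, best_ratio = None, -math.inf
    for policy in itertools.product(ACTIONS, repeat=2):
        m, v = evaluate(policy, horizon)
        ratio = m[start] / math.sqrt(v[start])
        if ratio > best_ratio:
            best, best_ratio = policy, ratio
    return best, best_ratio

policy, ratio = best_policy()
print(policy, round(ratio, 3))
```

Brute-force enumeration is used only because the toy state space is tiny; the paper's contribution is precisely an algorithm that avoids such enumeration for the general case.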
Date: 1977
Downloads: http://dx.doi.org/10.1287/mnsc.23.6.612 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:23:y:1977:i:6:p:612-620
More articles in Management Science from INFORMS.