Persistently optimal policies in stochastic dynamic programming with generalized discounting
Anna Jaśkiewicz,
Janusz Matkowski and
Andrzej Nowak
MPRA Paper from University Library of Munich, Germany
Abstract:
In this paper we study a Markov decision process with a non-linear discount function. Our approach is in the spirit of the von Neumann-Morgenstern concept and is based on the notion of expectation. First, we define utilities on the space of trajectories of the process in the finite and infinite time horizon, and then take their expected values. It turns out that the associated optimization problem leads to non-stationary dynamic programming and an infinite system of Bellman equations, which yield persistently optimal policies. Our theory is illustrated by examples.
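As a toy illustration of the idea behind non-linear (variable) discounting, the sketch below runs value iteration on a tiny deterministic decision process in which the continuation value is passed through a concave discount function δ with δ(0) = 0, instead of being multiplied by a constant factor. The states, rewards, transitions, and the particular δ are all hypothetical choices made for the example; this is not the construction or the system of Bellman equations developed in the paper, merely a minimal sketch of the flavor of the recursion.

```python
# Toy value iteration with a non-linear discount function delta.
# All model ingredients here are hypothetical, chosen only so that the
# Bellman operator is a contraction (delta is 0.9-Lipschitz).

def delta(t, beta=0.9):
    # Increasing, concave discount function with delta(0) = 0.
    return beta * t / (1.0 + 0.1 * t)

states = [0, 1, 2]
actions = [0, 1]

def reward(s, a):
    # Hypothetical one-period utility.
    return 1.0 + 0.5 * s - 0.2 * a

def step(s, a):
    # Deterministic transition for simplicity.
    return min(len(states) - 1, s + a)

# Iterate the (generalized) Bellman operator:
# v(s) = max_a [ reward(s, a) + delta(v(next state)) ].
v = {s: 0.0 for s in states}
for _ in range(500):
    v = {s: max(reward(s, a) + delta(v[step(s, a)]) for a in actions)
         for s in states}

# Greedy policy with respect to the fixed point.
policy = {s: max(actions, key=lambda a: reward(s, a) + delta(v[step(s, a)]))
          for s in states}
print(v, policy)
```

With a constant discount factor β the recursion reduces to the classical case; here the non-linearity of δ is what motivates, in the paper's general stochastic setting, the non-stationary formulation and the infinite system of Bellman equations.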
Keywords: Stochastic dynamic programming; Persistently optimal policies; Variable discounting; Bellman equation; Resource extraction; Growth theory
JEL codes: C61, D90
Date: 2011-06-21
New Economics Papers: this item is included in nep-dge and nep-ore
Downloads: https://mpra.ub.uni-muenchen.de/31755/1/MPRA_paper_31755.pdf (original version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:pra:mprapa:31755