DISCOUNTING LONG RUN AVERAGE GROWTH IN STOCHASTIC DYNAMIC PROGRAMS
Jorge Durán
Working Papers. Serie AD from Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie)
Abstract:
Finding solutions to the Bellman equation often relies on restrictive boundedness assumptions. In this paper we develop a method of proof that allows us to dispense with the assumption that returns are bounded from above. In applications, our assumptions require only that long run average (expected) growth be sufficiently discounted, in sharp contrast with classical assumptions that either absolutely bound growth or bound each period's (instead of long run) maximum (instead of average) growth. We discuss our work in relation to the literature and provide several examples.
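To illustrate the classical setting the paper generalizes, the following is a minimal sketch of value iteration on a finite-state Markov decision problem, where bounded returns make the Bellman operator a beta-contraction in the sup norm. This is not the paper's method (which handles returns unbounded from above via weighted norms and discounting of long run average growth); the example problem and all names here are hypothetical illustrations of the bounded baseline case only.

```python
def value_iteration(rewards, transitions, beta, tol=1e-10, max_iter=100_000):
    """Iterate the Bellman operator T to its unique fixed point.

    rewards[s][a]      -- current return of action a in state s (bounded)
    transitions[a][s]  -- row-stochastic distribution over next states
    beta               -- discount factor, 0 < beta < 1

    With bounded returns, T is a contraction with modulus beta, so the
    iterates converge geometrically to the value function.
    """
    n = len(rewards)
    V = [0.0] * n
    for _ in range(max_iter):
        # (T V)(s) = max_a [ r(s, a) + beta * E[V(s') | s, a] ]
        V_new = [
            max(
                rewards[s][a]
                + beta * sum(transitions[a][s][sp] * V[sp] for sp in range(n))
                for a in range(len(rewards[s]))
            )
            for s in range(n)
        ]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new
    return V
```

On a small two-state example, the returned V satisfies the Bellman equation up to numerical tolerance; the paper's contribution is precisely to extend this fixed-point logic to settings where the sup norm above is unavailable because returns grow without bound.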
Keywords: Dynamic Programming; Weighted Norms; Contraction Mappings; Dominated Convergence; Non Additive Recursive Functions.
Pages: 40 pages
Date: 2002-07
Published by Ivie
Downloads:
http://www.ivie.es/downloads/docs/wpasad/wpasad-2002-08.pdf First version, 2002 (application/pdf)
Related works:
Journal Article: Discounting long run average growth in stochastic dynamic programs (2003) 
Working Paper: Discounting long run average growth in stochastic dynamic programs (2001) 
Working Paper: Discounting Long Run Average Growth in Stochastic Dynamic Programs (2000) 
Persistent link: https://EconPapers.repec.org/RePEc:ivi:wpasad:2002-08
Bibliographic data for series maintained by Departamento de Edición.