
Solving Stochastic Dynamic Programming Problems Using Rules Of Thumb

Anthony Smith

No 816, Working Paper from Economics Department, Queen's University

Abstract: This paper develops a new method for constructing approximate solutions to discrete-time, infinite-horizon, discounted stochastic dynamic programming problems with convex choice sets. The key idea is to restrict the decision rule to belong to a parametric class of functions. The agent then chooses the best decision rule from within this class. Monte Carlo simulations are used to calculate arbitrarily precise estimates of the optimal decision rule parameters. The solution method is used to solve a version of the Brock-Mirman (1972) stochastic optimal growth model. For this model, relatively simple rules of thumb provide very good approximations to optimal behavior.
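The abstract's idea can be illustrated with a minimal sketch: parameterize the decision rule, estimate expected discounted utility by Monte Carlo simulation, and search over the rule's parameters. The sketch below assumes a standard Brock-Mirman specification (log utility, Cobb-Douglas production, full depreciation) and a hypothetical one-parameter consumption rule c = theta * output; these details and all names are illustrative, not taken from the paper itself.

```python
import numpy as np

# Illustrative parameterization (assumed, not from the paper):
# log utility, Cobb-Douglas output y = z * k**alpha, full depreciation,
# and a one-parameter rule of thumb c = theta * y.
alpha, beta, sigma = 0.36, 0.95, 0.10   # technology, discount factor, shock s.d.
T, N = 200, 500                         # horizon truncation, simulated paths

rng = np.random.default_rng(0)
shocks = rng.normal(0.0, sigma, size=(N, T))  # common random numbers across thetas

def mean_discounted_utility(theta):
    """Monte Carlo estimate of expected discounted log utility
    under the rule of thumb c_t = theta * z_t * k_t**alpha."""
    k = np.full(N, 1.0)
    total = np.zeros(N)
    for t in range(T):
        y = np.exp(shocks[:, t]) * k**alpha  # stochastic output
        c = theta * y                        # rule-of-thumb consumption
        total += beta**t * np.log(c)
        k = y - c                            # capital carried to next period
    return total.mean()

# Grid search over the rule parameter; reusing the same shock draws for
# every theta keeps the simulated objective smooth in the parameter.
thetas = np.linspace(0.2, 0.9, 71)
best = max(thetas, key=mean_discounted_utility)
print(best)  # for this specification the known optimum is 1 - alpha*beta ~ 0.658
```

Because this specification has a known closed-form solution (consume the fraction 1 - alpha*beta of output), the simulated optimum can be checked against theory; in general the attraction of the approach is that the same simulate-and-optimize loop applies when no closed form exists.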

Keywords: rule of thumb; Monte Carlo simulation; numerical optimization
Pages: 36 pages
Date: 1991-05
Citations: View citations in EconPapers (9)

Downloads: (external link)
http://qed.econ.queensu.ca/working_papers/papers/qed_wp_816.pdf First version 1991 (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:qed:wpaper:816


More papers in Working Paper from Economics Department, Queen's University Contact information at EDIRC.
Bibliographic data for series maintained by Mark Babcock.

Page updated 2025-03-31
Handle: RePEc:qed:wpaper:816