Fractal steady states in stochastic optimal control models
Luigi Montrucchio and Fabio Privileggi
Annals of Operations Research, 1999, vol. 88, issue 0, 183-197
Abstract:
The paper is divided into two parts. We first extend the Boldrin and Montrucchio theorem [5] on the inverse control problem to the Markovian stochastic setting. Given a dynamical system x_{t+1} = g(x_t, z_t), we find a discount factor β* such that for each 0 < β < β* a concave problem exists for which the dynamical system is an optimal solution. In the second part, we use the previous result to construct stochastic optimal control systems having fractal attractors. In order to do this, we rely on some results by Hutchinson on fractals and self-similarities. A neo-classical three-sector stochastic optimal growth model exhibiting the Sierpinski carpet as the unique attractor is provided as an example. Copyright Kluwer Academic Publishers 1999
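For intuition, the sketch below is not the authors' growth model but a minimal illustration of the Hutchinson-style construction the abstract refers to: the Sierpinski carpet arises as the attractor of an iterated function system, and a random dynamical system of the form x_{t+1} = g(x_t, z_t) driven by i.i.d. shocks accumulates on it (the "chaos game"). The function names, the eight contraction maps, and the parameters are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed setup, not the paper's model): the Sierpinski carpet
# as the attractor of an iterated function system, following Hutchinson's idea.
# The stochastic dynamics x_{t+1} = g(x_t, z_t) picks one of eight contractions
# at random each period; the orbit accumulates on the carpet.
import random

# Eight affine contractions w_i(x, y) = ((x + a)/3, (y + b)/3), one for each
# cell of a 3x3 grid except the centre cell (a, b) = (1, 1).
OFFSETS = [(a, b) for a in range(3) for b in range(3) if (a, b) != (1, 1)]

def g(state, z):
    """One step of the random dynamical system x_{t+1} = g(x_t, z_t)."""
    x, y = state
    a, b = OFFSETS[z]
    return ((x + a) / 3.0, (y + b) / 3.0)

def orbit(n_steps=50_000, seed=0):
    """Generate an orbit that, after a short burn-in, samples the carpet."""
    rng = random.Random(seed)
    state = (0.5, 0.5)  # any initial condition in the unit square works
    points = []
    for t in range(n_steps):
        z = rng.randrange(len(OFFSETS))  # i.i.d. shock z_t
        state = g(state, z)
        if t > 100:  # discard the transient
            points.append(state)
    return points

if __name__ == "__main__":
    pts = orbit()
    print(f"collected {len(pts)} points on the carpet; first few: {pts[:3]}")
```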
Date: 1999
Downloads: (external link)
http://hdl.handle.net/10.1023/A:1018978213041 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:spr:annopr:v:88:y:1999:i:0:p:183-197:10.1023/a:1018978213041
Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10479
DOI: 10.1023/A:1018978213041
Annals of Operations Research is currently edited by Endre Boros