STOCHASTIC GRADIENT LEARNING AND INSTABILITY: AN EXAMPLE
Sergey Slobodyan, Anna Bogomolova and Dmitri Kolyuzhnov
Macroeconomic Dynamics, 2016, vol. 20, issue 3, 777-790
Abstract:
In this paper, we investigate the real-time behavior of constant-gain stochastic gradient (SG) learning, using the Phelps model of monetary policy as a testing ground. We find that whereas the self-confirming equilibrium is stable under the mean dynamics in a very large region, real-time learning diverges for all but the very smallest gain values. We employ a stochastic Lyapunov function approach to demonstrate that the SG mean dynamics is easily destabilized by the noise associated with real-time learning, because its Jacobian contains stable but very small eigenvalues. We also caution against using perpetual learning algorithms in settings with such small eigenvalues, as the real-time dynamics might diverge from an equilibrium that is stable under the mean dynamics.
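The constant-gain SG recursion underlying the abstract can be illustrated with a minimal sketch. This is a hypothetical scalar linear-regression example, not the paper's Phelps model: the names `sg_learn`, `theta_true`, and `gain` are illustrative. In this well-conditioned scalar case the mean dynamics has a strongly stable eigenvalue, so the estimate stays near the true value for a small gain; the paper's point is that when the mean-dynamics Jacobian has stable eigenvalues very close to zero, the same recursion's noise can drive real-time estimates away from the equilibrium.

```python
import random

def sg_learn(gain, theta_true, n_steps, seed=0):
    # Constant-gain stochastic gradient (SG) learning for the scalar
    # linear model y_t = theta* x_t + eps_t.  The agent's belief is
    # updated by the SG recursion
    #   theta_{t+1} = theta_t + gain * x_t * (y_t - theta_t * x_t),
    # i.e. a gradient step on the period-t squared forecast error
    # with a constant (non-decreasing) gain, so learning is perpetual.
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(n_steps):
        x = rng.gauss(0.0, 1.0)
        y = theta_true * x + 0.1 * rng.gauss(0.0, 1.0)
        theta += gain * x * (y - theta * x)
    return theta

# With a small gain, the real-time estimate hovers near theta*;
# larger gains amplify the noise term and widen the fluctuations.
estimate = sg_learn(gain=0.01, theta_true=0.5, n_steps=20_000)
```

The mean dynamics of this recursion is dθ/dτ = E[x²](θ* − θ), whose Jacobian eigenvalue −E[x²] = −1 is comfortably negative here; the destabilization documented in the paper arises in the multivariate Phelps setting, where some stable eigenvalues are very small in magnitude.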
Date: 2016
Persistent link: https://EconPapers.repec.org/RePEc:cup:macdyn:v:20:y:2016:i:03:p:777-790_00
More articles in Macroeconomic Dynamics from Cambridge University Press, UPH, Shaftesbury Road, Cambridge CB2 8BS, UK.
Bibliographic data for this series maintained by Kirk Stebbing.