Neural network learning for nonlinear economies
Julian Ashwin, Paul Beaudry and Martin Ellison
Journal of Monetary Economics, 2025, vol. 149, issue C
Abstract:
Neural networks offer a promising tool for the analysis of nonlinear economies. In this paper, we derive conditions for the stability of nonlinear rational expectations equilibria under neural network learning. We demonstrate the applicability of the conditions in analytical and numerical examples where the nonlinearity is caused by monetary policy targeting a range, rather than a specific value, of inflation. If shock persistence is high or there is inertia in the structure of the economy, then the only rational expectations equilibria that are learnable may involve inflation spending long periods outside its target range. Neural network learning is also useful for solving and selecting between multiple equilibria and steady states in other settings, such as when there is a zero lower bound on the nominal interest rate.
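Note: the paper's model, learning algorithm and stability conditions are not reproduced in this record. Purely as a loose illustration of what "neural network learning" means in an adaptive-learning setting, the sketch below trains a small single-hidden-layer network to forecast inflation in a toy self-referential economy with a kinked policy rule that reacts only when inflation leaves a target band. Every functional form and parameter here (RHO, BETA, BAND, PHI, GAIN, the inflation equation itself) is an illustrative assumption, not taken from the article.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative economy (NOT the paper's model): inflation depends on the
# agent's one-step-ahead forecast and on an AR(1) shock, with a kink coming
# from a policy rule that only reacts when inflation leaves a target band.
RHO, SIGMA = 0.9, 0.1   # shock persistence and volatility (assumed)
BETA = 0.95             # weight on expected inflation (assumed)
BAND = 1.0              # half-width of the inflation target range (assumed)
PHI = 1.5               # policy response outside the band (assumed)

def inflation(forecast, shock):
    """Realised inflation given the agent's forecast and the current shock."""
    pi = BETA * forecast + shock
    # Policy leans against inflation only outside the target range.
    if pi > BAND:
        pi -= PHI * (pi - BAND) / (1 + PHI)
    elif pi < -BAND:
        pi -= PHI * (pi + BAND) / (1 + PHI)
    return pi

# Single-hidden-layer network mapping the scalar state (shock) to a forecast.
H = 8
W1, b1 = 0.1 * rng.standard_normal(H), np.zeros(H)
W2, b2 = 0.1 * rng.standard_normal(H), 0.0

def forward(s):
    h = np.tanh(W1 * s + b1)     # hidden-layer activations
    return W2 @ h + b2, h

GAIN = 0.01                      # constant learning gain (assumed)
shock = 0.0
for t in range(50_000):
    shock = RHO * shock + SIGMA * rng.standard_normal()
    forecast, h = forward(shock)
    pi = inflation(forecast, shock)   # realised inflation given the forecast
    err = pi - forecast               # forecast error drives the update
    # One gradient step on squared forecast error (recursive adaptive learning,
    # treating realised inflation as given data).
    W2 += GAIN * err * h
    b2 += GAIN * err
    grad_h = err * W2 * (1.0 - h**2)
    W1 += GAIN * grad_h * shock
    b1 += GAIN * grad_h

print("learned forecast at shock = 0:", forward(0.0)[0])
print("learned forecast at shock = 1:", forward(1.0)[0])

If this recursive updating converges, the network approximates a fixed point of the forecast-to-outcome map, which is the sense in which learnability can select among rational expectations equilibria; whether it converges here depends entirely on the assumed parameters above.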
Keywords: Inflation targeting; Machine learning; Neural networks; Zero lower bound
Date: 2025
Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0304393224001764
Full text for ScienceDirect subscribers only
Related works:
Working Paper: Neural Network Learning for Nonlinear Economies (2024) 
Working Paper: Neural Network Learning for Nonlinear Economies (2024) 
Working Paper: Neural Network Learning for Nonlinear Economies (2024) 
Persistent link: https://EconPapers.repec.org/RePEc:eee:moneco:v:149:y:2025:i:c:s0304393224001764
DOI: 10.1016/j.jmoneco.2024.103723
Journal of Monetary Economics is currently edited by R. G. King and C. I. Plosser.