SMOOTHING-BASED INITIALIZATION FOR LEARNING-TO-FORECAST ALGORITHMS
Michele Berardi and Jaqueson Galimberti
Macroeconomic Dynamics, 2019, vol. 23, issue 3, 1008-1023
Under adaptive learning, recursive algorithms are proposed to represent how agents update their beliefs over time. For applied purposes, these algorithms require initial estimates of the agents' perceived law of motion. Obtaining appropriate initial estimates can become prohibitive within the usual data availability restrictions of macroeconomics. To circumvent this issue, we propose a new smoothing-based initialization routine that optimizes the use of a training sample of data to obtain initial estimates consistent with the statistical properties of the learning algorithm. Our method is generically formulated to cover different specifications of the learning mechanism, such as the least-squares and the stochastic gradient algorithms. Using simulations, we show that our method is able to speed up the convergence of initial estimates in exchange for a higher computational cost.
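To make the abstract's setting concrete, here is a minimal sketch (not the authors' code) of the two recursive learning algorithms it names, in a scalar regression with decreasing gain 1/t. The function names, the data-generating coefficient 0.5, and the gain sequence are illustrative assumptions; both recursions require the initial values (theta_0, and R_0 for least squares) that the paper's smoothing routine is designed to supply.

```python
import numpy as np

def rls_update(theta, R, x, y, gain):
    """One recursive least-squares step: update the moment matrix R,
    then the coefficient estimate theta (illustrative sketch)."""
    R = R + gain * (np.outer(x, x) - R)
    theta = theta + gain * np.linalg.solve(R, x * (y - x @ theta))
    return theta, R

def sg_update(theta, x, y, gain):
    """One stochastic-gradient step: no moment matrix is tracked."""
    return theta + gain * x * (y - x @ theta)

# Hypothetical simulation: learn the coefficient of y = 0.5*x + noise.
# theta and R below are naive initials; the paper's point is that
# better initials can be extracted from a training sample.
rng = np.random.default_rng(0)
theta, R = np.zeros(1), np.eye(1)
for t in range(1, 2001):
    x = rng.normal(size=1)
    y = 0.5 * x[0] + 0.1 * rng.normal()
    theta, R = rls_update(theta, R, x, y, 1.0 / t)
print(theta)  # close to the true coefficient 0.5
```

With a decreasing gain the recursion converges toward the true coefficient; the quality of the initial (theta, R) pair governs how fast, which is the convergence-speed trade-off the abstract reports.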
Downloads: https://www.cambridge.org/core/product/identifier/ ... type/journal_article (article abstract page, text/html)
Working Paper: Smoothing-based Initialization for Learning-to-Forecast Algorithms (2017)
Persistent link: https://EconPapers.repec.org/RePEc:cup:macdyn:v:23:y:2019:i:03:p:1008-1023_00
More articles in Macroeconomic Dynamics from Cambridge University Press, UPH, Shaftesbury Road, Cambridge CB2 8BS, UK.
Bibliographic data for series maintained by Keith Waters.