Estimation and inference in adaptive learning models with slowly decreasing gains
Alexander Mayer
Journal of Time Series Analysis, 2022, vol. 43, issue 5, 720-749
Abstract:
An asymptotic theory for estimation and inference in adaptive learning models with strong mixing regressors and martingale difference innovations is developed. The maintained polynomial gain specification provides a unified framework which permits slow convergence of agents' beliefs and contains recursive least squares as a prominent special case. Reminiscent of the classical literature on cointegration, an asymptotic equivalence between two approaches to estimation of long-run equilibrium and short-run dynamics is established. Notwithstanding potential threats to inference arising from non-standard convergence rates and a singular variance-covariance matrix, hypotheses involving single as well as joint restrictions remain testable. Monte Carlo evidence confirms the accuracy of the asymptotic theory in finite samples.
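To fix ideas, the polynomial gain recursion described in the abstract can be illustrated with a minimal stochastic-approximation sketch. This is not the paper's exact model: the scalar regression, the gain exponent `kappa`, and the function `adaptive_learning` are illustrative assumptions; a gain of the form gamma_t = t^(-kappa) with kappa in (1/2, 1] is the generic polynomial specification, with kappa = 1 corresponding to a decreasing-gain scheme akin to recursive least squares.

```python
import numpy as np

def adaptive_learning(y, x, kappa=0.75, b0=0.0):
    """Recursive belief update b_t = b_{t-1} + gamma_t * x_t * (y_t - x_t * b_{t-1}),
    with polynomial gain gamma_t = t**(-kappa). Smaller kappa means a more
    slowly decreasing gain and hence slower convergence of beliefs."""
    b = b0
    path = []
    for t, (yt, xt) in enumerate(zip(y, x), start=1):
        gamma = t ** (-kappa)              # polynomial gain sequence
        b = b + gamma * xt * (yt - xt * b)  # adaptive learning update
        path.append(b)
    return np.array(path)

# Simulated example: true long-run coefficient is 2.0.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
beliefs = adaptive_learning(y, x, kappa=0.75)
```

With kappa in (1/2, 1] the gains are square-summable but not summable, so the belief path settles near the true coefficient; how fast it does so depends on kappa, which is the source of the non-standard convergence rates the abstract refers to.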
DOI: https://doi.org/10.1111/jtsa.12636
Persistent link: https://EconPapers.repec.org/RePEc:bla:jtsera:v:43:y:2022:i:5:p:720-749