Bootstrap Procedures for Recursive Estimation Schemes With Applications to Forecast Model Selection
Valentina Corradi and Norman Swanson
Additional contact information
Valentina Corradi: Queen Mary, University of London
Departmental Working Papers from Rutgers University, Department of Economics
In recent years it has become apparent that many of the classical testing procedures used to select amongst alternative economic theories and economic models are not realistic. In particular, researchers have become more aware of the fact that parameter estimation error and data dependence play a crucial role in test statistic limiting distributions, a role which had hitherto been largely ignored. Given that one of the primary ways of comparing different models and theories is via predictive accuracy tests, it is perhaps not surprising that a large literature on the topic has developed over the last 10 years, including, for example, important papers by Diebold and Mariano (1995), West (1996), and White (2000). In this literature, it is quite common to compare multiple models (which are possibly all misspecified - i.e. they are all approximations of some unknown true model) in terms of their out-of-sample predictive ability, for a given loss function. Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two applications in which predictive accuracy tests are made operational using our new bootstrap procedures. One application outlines a consistent test for out-of-sample nonlinear Granger causality; the other outlines a test for selecting amongst multiple alternative forecasting models, all of which may be viewed as approximations of some unknown underlying model. More specifically, our examples extend the White (2000) reality check to the case of non-vanishing parameter estimation error, and extend the integrated conditional moment (ICM) tests of Bierens (1982, 1990) and Bierens and Ploberger (1997) to the case of out-of-sample prediction.
Of note is that in both of these examples, it is shown that appropriate re-centering of the bootstrap score is required in order to ensure that the tests are properly sized, and the need for such re-centering is shown to arise quite naturally when testing hypotheses of predictive accuracy. The results of a Monte Carlo investigation of the ICM test suggest that the bootstrap procedure proposed in this paper yields tests with reasonable finite sample properties for samples with as few as 300 observations.
Keywords: Predictive density
JEL-codes: C22
Pages: 20 pages
New Economics Papers: this item is included in nep-ecm, nep-ets and nep-fin
Citations: 2
Persistent link: https://EconPapers.repec.org/RePEc:rut:rutres:200418