Testing for Neglected Nonlinearity Using Artificial Neural Networks with Many Randomized Hidden Unit Activations
Tae Hwy Lee,
Zhou Xi and
Ru Zhang
Additional contact information
Zhou Xi: University of California, Riverside
Ru Zhang: University of California, Riverside
No 201411, Working Papers from University of California at Riverside, Department of Economics
Abstract:
This paper makes a simple but previously neglected point regarding an empirical application of the tests of White (1989) and Lee, White and Granger (LWG, 1993) for neglected nonlinearity in the conditional mean, using the feedforward single-layer artificial neural network (ANN). Because the activation parameters in the hidden layer are not identified under the null hypothesis of linearity, LWG suggested activating the ANN hidden units with randomly generated activation parameters. Their Monte Carlo experiments demonstrated excellent performance (good size and power), even though LWG considered a fairly small number (10 or 20) of random hidden unit activations. However, in this paper we note that the good size and power in Monte Carlo experiments are the average frequencies of rejecting the null hypothesis over multiple replications of the data generating process. Averaging over many simulations in Monte Carlo smooths out the randomness of the activations. In an empirical study, unlike in a Monte Carlo study, multiple realizations of the data are not possible or available. In this case, the ANN test is sensitive to the randomly generated activation parameters. One solution is the use of Bonferroni bounds, as suggested in LWG (1993), which however still exhibits some excessive dependence on the random activations (as shown in this paper). Another solution is to integrate the test statistic over the nuisance parameter space, for which, however, a bootstrap or simulation must be used to obtain the null distribution of the integrated statistic. In this paper, we consider a much simpler solution that is shown to work very well: we simply increase the number of randomized hidden unit activations to a (very) large number (e.g., 1000). We show that using many randomly generated activation parameters robustifies the performance of the ANN test when it is applied to real empirical data.
This robustification is reliable and useful in practice, and comes at essentially no cost, as increasing the number of random activations is almost costless given today's computer technology.
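To make the procedure concrete, the following is a minimal sketch of an LM-type neglected-nonlinearity test in the spirit of LWG (1993): fit the linear model, form many randomized logistic hidden-unit activations, reduce them to a few principal components (to avoid collinearity among the activations), and regress the residuals on the linear regressors plus those components. The function name, the uniform range for the activation parameters, and the number of components kept are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from scipy import stats

def ann_nonlinearity_test(y, X, q=1000, n_pc=2, seed=0):
    """LM-type test for neglected nonlinearity with q randomized
    hidden-unit activations (illustrative sketch, not LWG's exact setup).

    y : (n,) response; X : (n, k) regressors (no constant column).
    Returns the n*R^2 statistic and its chi-square(n_pc) p-value.
    """
    n, k = X.shape
    Z = np.column_stack([np.ones(n), X])            # linear part with intercept
    # OLS residuals from the linear (null) model
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ beta
    # q randomized logistic activations psi(Z gamma_j); the uniform(-2, 2)
    # range for the activation parameters is an assumed choice
    rng = np.random.default_rng(seed)
    G = rng.uniform(-2.0, 2.0, size=(k + 1, q))
    Psi = 1.0 / (1.0 + np.exp(-(Z @ G)))
    # keep the leading principal components of the activations
    Psi_c = Psi - Psi.mean(axis=0)
    U, s, _ = np.linalg.svd(Psi_c, full_matrices=False)
    P = U[:, :n_pc] * s[:n_pc]
    # auxiliary regression of residuals on [Z, P]; LM statistic is n*R^2
    W = np.column_stack([Z, P])
    coef, *_ = np.linalg.lstsq(W, e, rcond=None)
    u = e - W @ coef
    R2 = 1.0 - (u @ u) / (e @ e)
    lm = n * R2
    return lm, stats.chi2.sf(lm, df=n_pc)
```

With a large q (e.g., 1000), the principal components summarize the span of the many randomized activations, so the statistic is far less sensitive to any particular random draw than a test built on only 10 or 20 activations.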
Keywords: Many Activations; Randomized Nuisance Parameters; Bonferroni Bounds; Principal Components (search for similar items in EconPapers)
JEL-codes: C1 C4 C5 (search for similar items in EconPapers)
Pages: 29
Date: 2014-09
New Economics Papers: this item is included in nep-cmp, nep-ecm and nep-ore
Published in Journal of Time Series Econometrics 5(1): 61-86. May 2013.
Downloads: (external link)
https://economics.ucr.edu/repec/ucr/wpaper/201411.pdf First version, 2014 (application/pdf)
Related works:
Journal Article: Testing for Neglected Nonlinearity Using Artificial Neural Networks with Many Randomized Hidden Unit Activations (2013) 
Persistent link: https://EconPapers.repec.org/RePEc:ucr:wpaper:201411
More papers in Working Papers from University of California at Riverside, Department of Economics Contact information at EDIRC.
Bibliographic data for series maintained by Kelvin Mac.