Prediction intervals for Deep Neural Networks
Tullio Mancini, Hector Calvo-Pardo and Jose Olmo
Papers from arXiv.org
Abstract:
This paper proposes a method for constructing prediction intervals for the output of neural network models. To do this, we adapt the extremely randomized trees method, originally developed for random forests, to construct ensembles of neural networks. The extra randomness introduced in the ensemble reduces the variance of the predictions and yields gains in out-of-sample accuracy. An extensive Monte Carlo simulation exercise shows the good performance of this novel method for constructing prediction intervals in terms of coverage probability and mean square prediction error. The approach outperforms state-of-the-art methods in the literature, such as the widely used MC dropout and bootstrap procedures. The out-of-sample accuracy of the novel algorithm is further evaluated using experimental settings already adopted in the literature.
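The general idea described in the abstract, an ensemble of independently randomized neural networks whose spread of predictions yields a percentile-based prediction interval, can be sketched as follows. This is a minimal illustration of ensemble-based intervals, not the authors' exact algorithm; the data, network sizes, and randomization scheme (per-member weight initialization plus bootstrap resampling) are assumptions for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic regression data (assumed for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=500)

n_members = 20
ensemble = []
for seed in range(n_members):
    # Extra randomness: each member gets its own weight initialization
    # (random_state) and its own bootstrap resample of the training data.
    idx = rng.integers(0, len(X), size=len(X))
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500,
                       random_state=seed)
    net.fit(X[idx], y[idx])
    ensemble.append(net)

# The ensemble's point forecast is the mean prediction; the prediction
# interval comes from percentiles of the member predictions.
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in ensemble])  # (n_members, n_test)
point = preds.mean(axis=0)
lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)  # nominal 95% band
```

The percentile band here captures only the variance across ensemble members (model uncertainty); the paper's method is evaluated against MC dropout and bootstrap procedures on coverage probability and mean square prediction error.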
Date: 2020-10, Revised 2021-05
New Economics Papers: this item is included in nep-big, nep-cmp and nep-ecm
Downloads: http://arxiv.org/pdf/2010.04044 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2010.04044