How hard is it to pick the right model? MCS and backtest overfitting
Diego Aparicio and Marcos López de Prado
Additional contact information
Diego Aparicio: Department of Economics, Massachusetts Institute of Technology, Cambridge, MA, USA
Marcos López de Prado: True Positive Technologies, New York, NY, USA
Algorithmic Finance, 2018, vol. 7, issue 1-2, 53-61
Abstract:
Recent advances in machine learning and artificial intelligence, together with the availability of billions of high-frequency data signals, have made model selection a challenging and pressing need. However, most of the model selection methods available in modern finance are subject to backtest overfitting: the probability of selecting a financial strategy that outperforms in the backtest but underperforms in practice. We evaluate the performance of the novel model confidence set (MCS) introduced in Hansen et al. (2011a) on a simple machine learning trading strategy problem. We find that MCS is not robust to multiple testing and that it requires a very high signal-to-noise ratio to be useful. More generally, we raise awareness of the limitations of model selection in finance.
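The overfitting mechanism described in the abstract, selecting the strategy with the best backtest among many candidates evaluated on the same data, can be illustrated with a short simulation. The sketch below is not the paper's experiment and does not implement the MCS procedure; the number of strategies, sample lengths, and return volatility are illustrative assumptions, and every simulated strategy has zero true edge.

    # Minimal sketch of backtest overfitting under multiple testing
    # (illustrative parameters; not the authors' experiment).
    import numpy as np

    rng = np.random.default_rng(0)
    n_strategies = 1000          # candidate strategies, all pure noise
    n_in, n_out = 252, 252       # in-sample / out-of-sample daily returns

    in_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_in))
    out_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_out))

    def annualized_sharpe(r):
        # Annualized Sharpe ratio for each row of daily returns.
        return np.sqrt(252) * r.mean(axis=1) / r.std(axis=1, ddof=1)

    sr_in = annualized_sharpe(in_sample)
    sr_out = annualized_sharpe(out_sample)
    best = np.argmax(sr_in)      # "model selection" by best backtest Sharpe

    print(f"in-sample Sharpe of selected strategy:     {sr_in[best]:.2f}")
    print(f"out-of-sample Sharpe of selected strategy: {sr_out[best]:.2f}")

The selection step, not the strategies, creates the apparent performance: the chosen strategy's in-sample Sharpe ratio is large purely by chance, while its out-of-sample Sharpe ratio is close to zero on average.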
Keywords: Forecasting; model confidence set; machine learning; model selection; multiple testing
JEL codes: G17; C52; C53
Date: 2018
Persistent link: https://EconPapers.repec.org/RePEc:ris:iosalg:0067
Algorithmic Finance is published by IOS Press and is currently edited by Phil Maymin.