Assessing the forecast performance of models of choice
Dale O. Stahl
Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), 2018, vol. 73, issue C, 86-92
We often want to predict human behavior. It is well known that the model that best fits in-sample data is not necessarily the model that forecasts (i.e., predicts out-of-sample) best, yet we lack guidance on how to select a model for the purpose of forecasting. We illustrate the general issues and methods with the case of Rank-Dependent Expected Utility versus Expected Utility, using laboratory data and simulations. We find that, because of over-fitting, poor forecasting performance is a likely outcome at typical laboratory sample sizes. Finally, we derive a decision-theory-based rule for selecting the best model for forecasting as a function of the sample size.
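The over-fitting phenomenon the abstract describes can be illustrated with a generic sketch (not the paper's actual RDEU-vs-EU estimation): on a small synthetic sample, a flexible model always achieves a lower in-sample error than a simple nested model, while k-fold cross-validation estimates its out-of-sample forecast error, which may well be worse. The data, the polynomial models, and the fold count below are all hypothetical stand-ins chosen for illustration.

```python
import numpy as np

def insample_mse(x, y, degree):
    """In-sample mean squared error of a degree-`degree` polynomial fit."""
    coef = np.polyfit(x, y, degree)
    return float(np.mean((y - np.polyval(coef, x)) ** 2))

def cv_mse(x, y, degree, k=5, seed=0):
    """Out-of-sample MSE estimated by k-fold cross-validation:
    fit on k-1 folds, forecast the held-out fold, average the errors."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coef, x[test])
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errs))

# Hypothetical small "laboratory" sample: linear truth plus noise.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + rng.normal(scale=0.5, size=x.size)

for deg in (1, 5):  # simple model vs flexible (nested) model
    print(deg, insample_mse(x, y, deg), cv_mse(x, y, deg))
```

Because the degree-1 model is nested in the degree-5 model, the flexible model's in-sample MSE is never larger; whether it also forecasts better is exactly what cross-validation checks, and with 30 noisy observations it typically does not.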
Keywords: Forecast performance; Over-fitting; Cross-validation; Lottery choice
JEL codes: C52; C53; C91; D81
Persistent link: https://EconPapers.repec.org/RePEc:eee:soceco:v:73:y:2018:i:c:p:86-92