Assessing the forecast performance of models of choice
Dale Stahl
Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), 2018, vol. 73, issue C, 86-92
Abstract:
We often want to predict human behavior. It is well known that the model that fits in-sample data best is not necessarily the model that forecasts (i.e., predicts out-of-sample) best, but we lack guidance on how to select a model for the purpose of forecasting. We illustrate the general issues and methods with the case of Rank-Dependent Expected Utility versus Expected Utility, using laboratory data and simulations. We find that poor forecasting performance is a likely outcome for typical laboratory sample sizes due to over-fitting. Finally, we derive a decision-theory-based rule for selecting the best model for forecasting depending on the sample size.
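The abstract's central claim can be illustrated with a minimal, generic simulation. The sketch below is not the paper's RDEU/EU lottery-choice setup or its data; it uses a hypothetical data-generating process in which a simpler (constant) model is correctly specified and a richer (linear) model can only over-fit, mirroring the nested-model comparison. The richer model always fits (weakly) better in-sample, yet on small training samples it tends to forecast worse out-of-sample.

```python
import random

random.seed(7)

def simulate_once(n_train=20, n_test=200, sigma=1.0):
    """One train/test split under a DGP where y is pure noise around 0."""
    def draw(n):
        xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
        ys = [random.gauss(0.0, sigma) for _ in range(n)]
        return xs, ys

    tr_x, tr_y = draw(n_train)
    te_x, te_y = draw(n_test)

    # Constant model: sample mean (correctly specified here).
    c = sum(tr_y) / n_train

    # Linear model: closed-form simple OLS, y = a + b*x (over-parameterized).
    mx = sum(tr_x) / n_train
    sxx = sum((x - mx) ** 2 for x in tr_x)
    sxy = sum((x - mx) * (y - c) for x, y in zip(tr_x, tr_y))
    b = sxy / sxx
    a = c - b * mx

    def mse(pred, xs, ys):
        return sum((pred(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    return (mse(lambda x: c, tr_x, tr_y),
            mse(lambda x: a + b * x, tr_x, tr_y),
            mse(lambda x: c, te_x, te_y),
            mse(lambda x: a + b * x, te_x, te_y))

# Average over many replications, as in a simulation study.
R = 300
tr_c = tr_l = te_c = te_l = 0.0
for _ in range(R):
    w, x_, y_, z_ = simulate_once()
    tr_c += w / R; tr_l += x_ / R; te_c += y_ / R; te_l += z_ / R

print(f"in-sample MSE:    constant={tr_c:.3f}  linear={tr_l:.3f}")
print(f"out-of-sample MSE: constant={te_c:.3f}  linear={te_l:.3f}")
# The linear model fits better in-sample but, on average,
# forecasts worse out-of-sample: over-fitting.
```

Since the constant model is nested in the linear one, the in-sample inequality holds in every replication; the out-of-sample reversal emerges on average, which is why averaging over replications (rather than a single split) is the informative comparison.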
Keywords: Forecast performance; Over-fitting; Cross-validation; Lottery choice
JEL-codes: C52 C53 C91 D81
Date: 2018
Citations: 2 (in EconPapers)
Downloads: http://www.sciencedirect.com/science/article/pii/S2214804318300739 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:soceco:v:73:y:2018:i:c:p:86-92
DOI: 10.1016/j.socec.2018.02.006
Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics) is currently edited by Pablo Brañas Garza
Bibliographic data for this series is maintained by Catherine Liu.