Prediction, Estimation, and Attribution
Bradley Efron
International Statistical Review, 2020, vol. 88, issue S1, S28-S59
Abstract:
The scientific needs and computational limitations of the twentieth century fashioned classical statistical methodology. Both the needs and limitations have changed in the twenty‐first, and so has the methodology. Large‐scale prediction algorithms—neural nets, deep learning, boosting, support vector machines, random forests—have achieved star status in the popular press. They are recognizable as heirs to the regression tradition, but ones carried out at enormous scale and on titanic datasets. How do these algorithms compare with standard regression techniques such as ordinary least squares or logistic regression? Several key discrepancies will be examined, centering on the differences between prediction and estimation or prediction and attribution (significance testing). Most of the discussion is carried out through small numerical examples.
Date: 2020
DOI: https://doi.org/10.1111/insr.12409
Persistent link: https://EconPapers.repec.org/RePEc:bla:istatr:v:88:y:2020:i:s1:p:s28-s59