Fifty Years of Classification and Regression Trees
Wei-Yin Loh
International Statistical Review, 2014, vol. 82, issue 3, 329-348
Abstract:
type="main" xml:id="insr12016-abs-0001"> Fifty years have passed since the publication of the first regression tree algorithm. New techniques have added capabilities that far surpass those of the early methods. Modern classification trees can partition the data with linear splits on subsets of variables and fit nearest neighbor, kernel density, and other models in the partitions. Regression trees can fit almost every kind of traditional statistical model, including least-squares, quantile, logistic, Poisson, and proportional hazards models, as well as models for longitudinal and multiresponse data. Greater availability and affordability of software (much of which is free) have played a significant role in helping the techniques gain acceptance and popularity in the broader scientific community. This article surveys the developments and briefly reviews the key ideas behind some of the major algorithms.
Date: 2014
Citations: 42 (as listed in EconPapers)
Downloads: http://hdl.handle.net/10.1111/insr.12016 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:bla:istatr:v:82:y:2014:i:3:p:329-348
Ordering information: This journal article can be ordered from
http://www.blackwell ... bs.asp?ref=0306-7734
International Statistical Review is currently edited by Eugene Seneta and Kees Zeelenberg
More articles in International Statistical Review from the International Statistical Institute. Contact information at EDIRC.
Bibliographic data for series maintained by Wiley Content Delivery.