Interpretable Machine Learning Using Partial Linear Models
Emmanuel Flachaire, Sullivan Hué, Sébastien Laurent and Gilles Hacheme
Oxford Bulletin of Economics and Statistics, 2024, vol. 86, issue 3, 519-540
Abstract:
Despite their high predictive performance, random forest and gradient boosting are often considered black boxes, which has raised concerns among practitioners and regulators. As an alternative, we suggest using partial linear models that are inherently interpretable. Specifically, we propose to combine parametric and non-parametric functions to accurately capture the linearities and non-linearities between dependent and explanatory variables, together with a variable selection procedure to control for overfitting. Estimation relies on a two-step procedure building on the double residual method. We illustrate the predictive performance and interpretability of our approach on a regression problem.
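For readers unfamiliar with the double residual method mentioned in the abstract, the sketch below illustrates the generic two-step idea for a partially linear model y = x'beta + g(z) + e: the nuisance functions E[y|z] and E[x|z] are first estimated non-parametrically, and beta is then recovered by regressing the y-residuals on the x-residuals. This is only an illustrative sketch of the Robinson-type estimator; the choice of random forests for the nuisance step, the synthetic data, and all variable names are assumptions and not the authors' implementation.

# Minimal sketch of the double residual (Robinson-type) two-step estimator
# for a partially linear model  y = x*beta + g(z) + e.
# Random forests for the nuisance functions are an illustrative choice only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))            # covariates entering linearly
z = rng.uniform(-3, 3, size=(n, 1))    # covariate entering non-linearly
g = np.sin(z[:, 0]) + 0.5 * z[:, 0] ** 2
y = x @ np.array([1.0, -2.0]) + g + rng.normal(scale=0.5, size=n)

# Step 1: estimate the nuisance functions E[y|z] and E[x_j|z] non-parametrically.
m_y = RandomForestRegressor(n_estimators=200, random_state=0).fit(z, y)
y_res = y - m_y.predict(z)

x_res = np.empty_like(x)
for j in range(x.shape[1]):
    m_xj = RandomForestRegressor(n_estimators=200, random_state=0).fit(z, x[:, j])
    x_res[:, j] = x[:, j] - m_xj.predict(z)

# Step 2: regress the y-residuals on the x-residuals to recover beta.
ols = LinearRegression(fit_intercept=False).fit(x_res, y_res)
print("estimated beta:", ols.coef_)    # should be close to (1, -2)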
Date: 2024
Downloads: https://doi.org/10.1111/obes.12592
Related works:
Working Paper: Interpretable Machine Learning Using Partial Linear Models* (2023)
Persistent link: https://EconPapers.repec.org/RePEc:bla:obuest:v:86:y:2024:i:3:p:519-540
Oxford Bulletin of Economics and Statistics is currently edited by Christopher Adam, Anindya Banerjee, Christopher Bowdler, David Hendry, Adriaan Kalwij, John Knight and Jonathan Temple
More articles in Oxford Bulletin of Economics and Statistics from the Department of Economics, University of Oxford. Bibliographic data for this series is maintained by Wiley Content Delivery (contentdelivery@wiley.com).