Swag: A Wrapper Method for Sparse Learning
Roberto Molinari,
Gaetan Bakalli,
Stéphane Guerrier,
Cesare Miglioli,
Samuel Orso and
Olivier Scaillet
Additional contact information
Roberto Molinari: Auburn University
Gaetan Bakalli: University of Geneva - Geneva School of Economics and Management
Stéphane Guerrier: University of Geneva - Geneva School of Economics and Management
Cesare Miglioli: University of Geneva - Geneva School of Economics and Management
Samuel Orso: University of Geneva - Geneva School of Economics and Management
No 20-49, Swiss Finance Institute Research Paper Series from Swiss Finance Institute
Abstract:
Predictive power has always been the main focus of learning algorithms, with the goal of minimizing the test error in supervised classification and regression problems. While the general approach is to consider all available attributes in a dataset to best predict the response of interest, an important branch of research focuses on sparse learning in order to avoid overfitting, which can greatly harm out-of-sample accuracy. However, in many practical settings we believe that only an extremely small combination of different attributes affects the response, whereas even sparse-learning methods can retain a high number of attributes in high-dimensional settings and possibly deliver inconsistent prediction performance. As a consequence, these methods can also be hard for researchers and practitioners to interpret, a problem that is even more relevant for the “black-box” mechanisms of many learning approaches. Finally, aside from the need to quantify prediction uncertainty, there is often a problem of replicability, since not all data-collection procedures measure (or observe) the same attributes and therefore cannot make use of proposed learners for testing purposes. To address these issues, we propose to study a procedure that combines screening and wrapper methods and aims to find a library of extremely low-dimensional attribute combinations (with consequently low data-collection and storage costs) in order to (i) match or improve the predictive performance of any particular learning method that uses all attributes as input (including sparse learners); (ii) provide a low-dimensional network of attributes that is easily interpretable by researchers and practitioners; and (iii) increase the potential replicability of results, thanks to the diversity of attribute combinations defining strong learners with equivalent predictive power. We call this algorithm the “Sparse Wrapper AlGorithm” (SWAG).
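The screen-and-wrap idea described in the abstract can be sketched as follows. This is a minimal illustrative simplification, not the authors' implementation: the function names, the OLS base learner, the cross-validation scheme, and the fixed keep fraction are all assumptions made for the example. The sketch screens single attributes by cross-validated error, keeps the best fraction, and then greedily grows the retained subsets one attribute at a time, producing a library of low-dimensional attribute combinations at each dimension.

```python
import numpy as np

def cv_error(X, y, subset, k=5):
    """k-fold cross-validated MSE of an OLS learner restricted to `subset`."""
    n = len(y)
    idx = np.arange(n)
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        # Restrict to the candidate attribute subset and add an intercept.
        Xtr = np.column_stack([np.ones(len(train)), X[np.ix_(train, subset)]])
        Xte = np.column_stack([np.ones(len(fold)), X[np.ix_(fold, subset)]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        errs.append(np.mean((Xte @ beta - y[fold]) ** 2))
    return float(np.mean(errs))

def swag_like(X, y, d_max=3, keep_frac=0.5):
    """Greedy screen-and-wrap search over low-dimensional attribute subsets.

    Returns a library mapping each dimension d to the kept subsets of size d,
    sorted from lowest to highest cross-validated error.
    """
    p = X.shape[1]
    # Step 1: screen single attributes, keeping the best `keep_frac` fraction.
    scored = sorted((cv_error(X, y, [j]), (j,)) for j in range(p))
    n_keep = max(1, int(keep_frac * p))
    library = {1: [s for _, s in scored[:n_keep]]}
    screened = {j for _, (j,) in scored[:n_keep]}
    # Steps 2..d_max: grow each kept subset by one screened attribute.
    for d in range(2, d_max + 1):
        cands = {tuple(sorted(set(s) | {j}))
                 for s in library[d - 1] for j in screened if j not in s}
        scored_d = sorted((cv_error(X, y, list(s)), s) for s in cands)
        library[d] = [s for _, s in
                      scored_d[:max(1, int(keep_frac * len(scored_d)))]]
    return library
```

On a synthetic dataset where the response depends only on attributes 0 and 1, the dimension-2 library is then expected to be headed by the subset `(0, 1)`, illustrating how the procedure recovers an extremely low-dimensional combination.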
Keywords: interpretable machine learning; big data; wrapper; sparse learning; meta learning; ensemble learning; greedy algorithm; feature selection; variable importance network
JEL-codes: C45 C51 C52 C53 C55 C87
Pages: 17 pages
Date: 2020-06
New Economics Papers: this item is included in nep-big, nep-cmp, nep-ecm and nep-ore
Downloads: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3633843 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:chf:rpseri:rp2049
More papers in Swiss Finance Institute Research Paper Series from Swiss Finance Institute.
Bibliographic data for series maintained by Ridima Mittal ().