
Robust machine learning by median-of-means: theory and practice

Guillaume Lecué and Mathieu Lerasle
Additional contact information
Guillaume Lecué: CREST; CNRS; Université Paris Saclay
Mathieu Lerasle: CNRS, Département de Mathématiques d'Orsay

No 2017-32, Working Papers from Center for Research in Economics and Statistics

Abstract: We introduce new estimators for robust machine learning based on median-of-means (MOM) estimators of the mean of real-valued random variables. These estimators achieve optimal rates of convergence under minimal assumptions on the dataset. The dataset may also have been corrupted by outliers on which no assumption is made. We also analyze these new estimators with standard tools from robust statistics. In particular, we revisit the concept of breakdown point. We modify the original definition by studying the number of outliers that a dataset can contain without deteriorating the estimation properties of a given estimator. This new notion of breakdown number, which takes into account the statistical performance of the estimators, is non-asymptotic in nature and adapted for machine learning purposes. We prove that the breakdown number of our estimators is of the order of the number of observations times the rate of convergence. For instance, the breakdown number of our estimators for the problem of estimating a d-dimensional vector with noise variance σ² is σ²d, and it becomes σ²s log(ed/s) when this vector has only s non-zero components. Beyond this breakdown point, we prove that the rate of convergence achieved by our estimators is the number of outliers divided by the number of observations. Besides these theoretical guarantees, the major improvement brought by these new estimators is that they are easily computable in practice. In fact, essentially any algorithm used to approximate the standard Empirical Risk Minimizer (or its regularized versions) has a robust version approximating our estimators. On top of being robust to outliers, the "MOM versions" of these algorithms are even faster than the original ones, less demanding in memory in some situations, and well adapted to distributed datasets, which makes them particularly attractive for large-scale data analysis. As a proof of concept, we study several algorithms for the classical LASSO estimator. It turns out that the original algorithms can be substantially improved in practice by randomizing the blocks on which "local means" are computed at each step of the descent algorithm. A byproduct of this modification is that our algorithms come with a measure of data depth that can be used to detect outliers, which is another major issue in machine learning.
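For readers who want to experiment, the following is a minimal sketch, in Python, of the median-of-means idea the abstract builds on: shuffle the sample, split it into blocks, average within each block, and return the median of the block means. It is not taken from the paper; the function name, block count, and example data are purely illustrative.

import numpy as np

def median_of_means(x, n_blocks=10, rng=None):
    """Median-of-means estimate of E[X] from a 1-D sample.

    The sample is shuffled, split into `n_blocks` roughly equal blocks,
    the empirical mean is computed on each block, and the median of
    those block means is returned.  A minority of outlying blocks
    cannot move the median, which is the source of the robustness
    discussed in the abstract.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    idx = rng.permutation(len(x))
    blocks = np.array_split(idx, n_blocks)
    block_means = [x[b].mean() for b in blocks]
    return np.median(block_means)

# Example: heavy-tailed sample contaminated by a few gross outliers.
rng = np.random.default_rng(0)
sample = rng.standard_t(2.5, size=1000)      # heavy tails, mean 0
sample[:20] = 1e6                            # 20 gross outliers
print(np.mean(sample))                       # ruined by the outliers
print(median_of_means(sample, n_blocks=50))  # close to 0

In the MOM versions of descent algorithms described in the abstract, a block split of this kind is redrawn at each iteration and the update is computed on a median block, which is what keeps a minority of outlying observations from driving the optimization.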

Pages: 49 pages
Date: 2017-11-01
New Economics Papers: this item is included in nep-big, nep-cmp and nep-ecm
Citations: 2 (in EconPapers)

Downloads: (external link)
http://crest.science/RePEc/wpstorage/2017-32.pdf CREST working paper version (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:crs:wpaper:2017-32


More papers in Working Papers from the Center for Research in Economics and Statistics (contact information at EDIRC).
Bibliographic data for this series maintained by the Secretariat General and Murielle Jules (murielle.jules@ensae.fr).

Handle: RePEc:crs:wpaper:2017-32