A stochastic Gauss-Newton algorithm for regularized semi-discrete optimal transport
Bernard Bercu,
Jérémie Bigot,
Sébastien Gadat and
Emilia Siviero
Additional contact information
Bernard Bercu: IMB - Institut de Mathématiques de Bordeaux - UB - Université de Bordeaux - Bordeaux INP - Institut Polytechnique de Bordeaux - CNRS - Centre National de la Recherche Scientifique
Jérémie Bigot: IMB - Institut de Mathématiques de Bordeaux - UB - Université de Bordeaux - Bordeaux INP - Institut Polytechnique de Bordeaux - CNRS - Centre National de la Recherche Scientifique
Sébastien Gadat: TSE-R - Toulouse School of Economics - UT Capitole - Université Toulouse Capitole - UT - Université de Toulouse - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement
Emilia Siviero: IP Paris - Institut Polytechnique de Paris, IDS - Département Images, Données, Signal - Télécom ParisTech, S2A - Signal, Statistique et Apprentissage - LTCI - Laboratoire Traitement et Communication de l'Information - IMT - Institut Mines-Télécom [Paris] - Télécom Paris - IMT - Institut Mines-Télécom [Paris] - IP Paris - Institut Polytechnique de Paris
Post-Print from HAL
Abstract:
We introduce a new second order stochastic algorithm to estimate the entropically regularized optimal transport cost between two probability measures. The source measure can be arbitrarily chosen, either absolutely continuous or discrete, while the target measure is assumed to be discrete. To solve the semi-dual formulation of such a regularized and semi-discrete optimal transportation problem, we propose a stochastic Gauss-Newton algorithm that uses a sequence of data sampled from the source measure. This algorithm is shown to be adaptive to the geometry of the underlying convex optimization problem, with no critical hyperparameter requiring accurate tuning. We establish the almost sure convergence and the asymptotic normality of various estimators of interest that are constructed from this stochastic Gauss-Newton algorithm. We also analyze their non-asymptotic rates of convergence for the expected quadratic risk in the absence of strong convexity of the underlying objective function. The results of numerical experiments on simulated data are also reported to illustrate the finite sample properties of this Gauss-Newton algorithm for stochastic regularized optimal transport, and to show its advantages over the use of the stochastic gradient descent, stochastic Newton and ADAM algorithms.
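The Python sketch below illustrates the kind of stochastic semi-dual iteration the abstract describes; it is an assumption-laden illustration, not the authors' exact recursion. It ascends the entropic semi-dual objective H_eps(v) = E_X[ <v, nu> - eps log sum_j nu_j exp((v_j - c(X, y_j))/eps) ] over the dual potentials v attached to the atoms of the discrete target measure, drawing one fresh sample from the source measure per iteration and preconditioning with a damped sum of rank-one gradient outer products as a Gauss-Newton-type curvature proxy. The squared Euclidean cost, the step-size schedule, the damping term and all variable names are choices made for this sketch.

```python
# Illustrative sketch (not the authors' exact recursion): a stochastic
# Gauss-Newton-type ascent on the semi-dual of entropically regularized
# semi-discrete optimal transport.  The target measure nu is discrete on
# atoms y_1, ..., y_J; samples X_1, X_2, ... are drawn from the source measure.
import numpy as np

rng = np.random.default_rng(0)

# Discrete target measure: J atoms in the unit square with uniform weights.
J, d = 50, 2
y = rng.random((J, d))
nu = np.full(J, 1.0 / J)

eps = 0.1          # entropic regularization parameter (assumed value)
n_iter = 5000
damping = 1.0      # ridge term keeping the preconditioner invertible

def semi_dual_grad(x, v):
    """Gradient in v of h_eps(x, v) = <v, nu> - eps*log sum_j nu_j exp((v_j - c(x, y_j))/eps)."""
    c = 0.5 * np.sum((y - x) ** 2, axis=1)          # squared Euclidean cost c(x, y_j)
    z = (v - c) / eps
    z -= z.max()                                    # numerical stabilization
    w = nu * np.exp(z)
    pi = w / w.sum()                                # conditional transport weights
    return nu - pi

v = np.zeros(J)                                     # dual potentials on the target atoms
S = damping * np.eye(J)                             # running Gauss-Newton-type matrix

for n in range(1, n_iter + 1):
    x = rng.random(d)                               # fresh sample from the source measure
    g = semi_dual_grad(x, v)
    S += np.outer(g, g)                             # rank-one curvature proxy
    step = 1.0 / n ** 0.75                          # polynomially decaying step size (assumed)
    v += step * np.linalg.solve(S / n, g)           # preconditioned ascent step

print("approximate dual potentials (first atoms):", v[:5])
```

In this sketch the preconditioner plays the role of the second-order information that distinguishes a Gauss-Newton iteration from plain stochastic gradient ascent; the paper itself should be consulted for the actual recursion, step sizes and convergence guarantees.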
Keywords: Stochastic optimization; Stochastic Gauss-Newton algorithm; Optimal transport; Entropic regularization; Convergence of random variables.
Date: 2022-05-19
Note: View the original document on HAL open archive server: https://hal.science/hal-03794948v1
Published in Information and Inference: A Journal of the IMA, 2022, pp.1-56. ⟨10.1093/imaiai/iaac014⟩
Downloads:
https://hal.science/hal-03794948v1/document (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-03794948
DOI: 10.1093/imaiai/iaac014