Robust SGLD algorithm for solving non-convex distributionally robust optimisation problems
Ariel Neufeld, Matthew Ng Cheng En and Ying Zhang
Papers from arXiv.org
Abstract:
In this paper we develop a Stochastic Gradient Langevin Dynamics (SGLD) algorithm tailored for solving a certain class of non-convex distributionally robust optimisation (DRO) problems. By deriving non-asymptotic convergence bounds, we build an algorithm which for any prescribed accuracy $\varepsilon>0$ outputs an estimator whose expected excess risk is at most $\varepsilon$. As a concrete application, we consider the problem of identifying the best non-linear estimator of a given regression model involving a neural network using adversarially corrupted samples. We formulate this problem as a DRO problem and demonstrate both theoretically and numerically the applicability of the proposed robust SGLD algorithm. Moreover, numerical experiments show that the robust SGLD estimator outperforms the estimator obtained using vanilla SGLD in terms of test accuracy, which highlights the advantage of incorporating model uncertainty when optimising with perturbed samples.
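The generic SGLD iteration that the paper builds on can be sketched as follows. This is a minimal illustration on a toy quadratic objective with noisy gradients, not the paper's robust DRO variant; the objective, step size, inverse temperature `beta`, and noise scale are all illustrative choices, and the update rule shown is the standard one, $\theta_{k+1} = \theta_k - \lambda \widehat{\nabla U}(\theta_k) + \sqrt{2\lambda/\beta}\,\xi_k$:

```python
import numpy as np

def sgld(grad_estimate, theta0, step=1e-2, beta=1e4, n_steps=5000, rng=None):
    """Plain SGLD: theta <- theta - step * g + sqrt(2 * step / beta) * xi,
    where g is a stochastic gradient estimate and xi is standard Gaussian noise.
    All hyperparameter defaults here are illustrative, not taken from the paper."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        g = grad_estimate(theta, rng)
        noise = rng.standard_normal(theta.shape)
        theta = theta - step * g + np.sqrt(2.0 * step / beta) * noise
    return theta

# Toy expected loss u(theta) = 0.5 * ||theta - mu||^2, observed only through
# noisy gradient estimates; mu plays the role of the unknown minimiser.
mu = np.array([1.0, -2.0])

def noisy_grad(theta, rng):
    return (theta - mu) + 0.1 * rng.standard_normal(theta.shape)

theta_hat = sgld(noisy_grad, np.zeros(2))
```

For large `beta` the injected Gaussian noise is small and the iterates concentrate near the minimiser, which is the regime in which excess-risk bounds of the kind derived in the paper become meaningful.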
Date: 2024-03, Revised 2025-03
Citations: 1 (per EconPapers)
Downloads: http://arxiv.org/pdf/2403.09532 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2403.09532