Gradient Descent Provably Escapes Saddle Points in the Training of Shallow ReLU Networks
Patrick Cheridito,
Arnulf Jentzen and
Florian Rossmannek
Additional contact information
Patrick Cheridito: ETH Zurich
Arnulf Jentzen: The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen)
Florian Rossmannek: ETH Zurich
Journal of Optimization Theory and Applications, 2024, vol. 203, issue 3, No 19, 2617-2648
Abstract:
Dynamical systems theory has recently been applied in optimization to prove that gradient descent algorithms bypass so-called strict saddle points of the loss function. However, in many modern machine learning applications, the required regularity conditions are not satisfied. In this paper, we prove a variant of the relevant dynamical systems result, a center-stable manifold theorem, in which we relax some of the regularity requirements. We explore its relevance for various machine learning tasks, with a particular focus on shallow rectified linear unit (ReLU) and leaky ReLU networks with scalar input. Building on a detailed examination of critical points of the square integral loss function for shallow ReLU and leaky ReLU networks relative to an affine target function, we show that gradient descent circumvents most saddle points. Furthermore, we prove convergence to global minima under favourable initialization conditions, quantified by an explicit threshold on the limiting loss.
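As a brief illustration of the training setup described in the abstract, the following is a minimal sketch, not code from the paper: plain gradient descent on a shallow ReLU network with scalar input, minimizing a grid approximation of the square integral loss against an affine target. The network width, the target f(x) = 2x + 1, the step size, and the Gaussian initialization are all illustrative assumptions.

```python
import numpy as np

# Sketch: shallow ReLU network N(x) = sum_j v_j * max(w_j*x + b_j, 0) + c
# trained by gradient descent on the square integral loss
#   L = \int_0^1 (N(x) - (alpha*x + beta))^2 dx,
# approximated here on a uniform grid over [0, 1].
# Width, target coefficients, step size, and initialization are
# illustrative assumptions, not values taken from the paper.

rng = np.random.default_rng(0)
width = 8
alpha, beta = 2.0, 1.0              # affine target f(x) = alpha*x + beta

w = rng.normal(size=width)          # inner weights
b = rng.normal(size=width)          # inner biases
v = rng.normal(size=width)          # outer weights
c = 0.0                             # outer bias

x = np.linspace(0.0, 1.0, 1000)     # grid approximating the integral
target = alpha * x + beta
n = x.size

lr = 0.05
for step in range(20000):
    pre = np.outer(x, w) + b        # pre-activations, shape (n, width)
    act = np.maximum(pre, 0.0)      # ReLU activations
    pred = act @ v + c
    res = pred - target             # residual on the grid
    # Gradients of the grid-approximated integral loss.
    grad_v = 2.0 / n * act.T @ res
    grad_c = 2.0 / n * res.sum()
    mask = (pre > 0.0).astype(float)    # ReLU derivative (a.e.)
    grad_w = 2.0 / n * ((res[:, None] * mask * v) * x[:, None]).sum(axis=0)
    grad_b = 2.0 / n * (res[:, None] * mask * v).sum(axis=0)
    v -= lr * grad_v
    c -= lr * grad_c
    w -= lr * grad_w
    b -= lr * grad_b

print(f"final grid-approximated loss: {np.mean(res**2):.3e}")
```

Under a favorable random initialization, such iterates typically avoid the saddle points analyzed in the paper and drive the loss toward zero, in line with the convergence statement in the abstract.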
Keywords: Neural networks; Center-stable manifolds; Gradient descent; Nonconvex optimization; 68T07; 37D10
Date: 2024
Downloads: http://link.springer.com/10.1007/s10957-024-02513-3 (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:joptap:v:203:y:2024:i:3:d:10.1007_s10957-024-02513-3
DOI: 10.1007/s10957-024-02513-3