Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova,
Pavel Dvurechensky,
Alexander Gasnikov,
Eduard Gorbunov,
Sergey Guminov,
Dmitry Kamzolov and
Innokentiy Shibaev
Additional contact information
Marina Danilova: Institute of Control Sciences RAS
Pavel Dvurechensky: Weierstrass Institute for Applied Analysis and Stochastics
Alexander Gasnikov: Moscow Institute of Physics and Technology
Eduard Gorbunov: Moscow Institute of Physics and Technology
Sergey Guminov: HSE University
Dmitry Kamzolov: Moscow Institute of Physics and Technology
Innokentiy Shibaev: Moscow Institute of Physics and Technology
A chapter in High-Dimensional Optimization and Probability, 2022, pp 79-163 from Springer
Abstract:
Motivated by the recent increased interest in optimization algorithms for non-convex optimization, arising in the training of deep neural networks and other optimization problems in data analysis, we give an overview of recent theoretical results on global performance guarantees of optimization algorithms for non-convex optimization. We start with classical arguments showing that general non-convex problems cannot be solved efficiently in a reasonable time. Then we list problems for which the global minimizer can be found efficiently by exploiting the problem's structure as much as possible. Another way to deal with non-convexity is to relax the goal from finding the global minimum to finding a stationary point or a local minimum. For this setting, we first present known convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods. After that, we discuss quite general classes of non-convex problems, such as the minimization of α-weakly quasi-convex functions and of functions satisfying the Polyak–Łojasiewicz condition, which still admit theoretical convergence guarantees for first-order methods. Finally, we consider higher-order and zeroth-order/derivative-free methods and their convergence rates for non-convex optimization problems.
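As a minimal illustration of the kind of guarantee surveyed in the chapter: functions satisfying the Polyak–Łojasiewicz (PL) condition need not be convex, yet plain gradient descent with step size 1/L converges linearly to the global minimum. The sketch below uses a simple quadratic as a PL function; the matrix, step count, and starting point are illustrative choices, not taken from the chapter.

```python
import numpy as np

# f(x) = 0.5 * x^T A x with A positive definite satisfies the PL condition
#   ||grad f(x)||^2 >= 2 * mu * (f(x) - f*),
# where mu is the smallest eigenvalue of A. Under PL, gradient descent with
# step 1/L (L = largest eigenvalue, i.e., the smoothness constant) converges
# linearly to the global minimum f* = 0.
A = np.diag([1.0, 10.0])  # illustrative choice: mu = 1, L = 10
L = 10.0

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

x = np.array([5.0, -3.0])   # arbitrary starting point
for _ in range(200):
    x = x - (1.0 / L) * grad(x)

print(f(x))  # close to the global minimum f* = 0
```

Each iteration shrinks the suboptimality gap by at least a factor of (1 - mu/L), which is the linear rate discussed in the chapter's section on PL functions.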
Date: 2022
Persistent link: https://EconPapers.repec.org/RePEc:spr:spochp:978-3-031-00832-0_3
Ordering information: This item can be ordered from
http://www.springer.com/9783031008320
DOI: 10.1007/978-3-031-00832-0_3
More chapters in Springer Optimization and Its Applications from Springer