Adaptive Restart of the Optimized Gradient Method for Convex Optimization

Donghwan Kim (University of Michigan) and Jeffrey A. Fessler (University of Michigan)

Journal of Optimization Theory and Applications, 2018, vol. 178, issue 1, No 12, 240-263

Abstract: First-order methods with momentum, such as Nesterov’s fast gradient method, are very useful for convex optimization problems, but can exhibit undesirable oscillations that slow convergence in some applications. An adaptive restarting scheme can improve the convergence rate of the fast gradient method when the strong convexity parameter of the cost function is unknown, or when the iterates of the algorithm enter a locally strongly convex region. Recently, we introduced the optimized gradient method, a first-order algorithm whose per-iteration computational cost is similarly inexpensive to that of the fast gradient method, yet whose worst-case cost-function convergence bound is twice as fast as that of the fast gradient method and is optimal for large-dimensional smooth convex problems. Building on the success of accelerating the fast gradient method using adaptive restart, this paper investigates similar heuristic acceleration of the optimized gradient method. We first derive a new first-order method that resembles the optimized gradient method for strongly convex quadratic problems with known function parameters, yielding a linear convergence rate that is faster than that of the analogous version of the fast gradient method. We then provide a heuristic analysis and numerical experiments illustrating that adaptive restart can accelerate the convergence of the optimized gradient method. Numerical results also illustrate that adaptive restart is helpful for a proximal version of the optimized gradient method for nonsmooth composite convex functions.
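To illustrate the restart idea described in the abstract, the Python sketch below combines the standard optimized gradient method (OGM) iteration with a function-value restart heuristic in the spirit of O'Donoghue and Candès: the momentum is reset whenever the cost increases. This is only a minimal sketch under these assumptions, not the paper's exact restart scheme; the function names and the quadratic test problem are hypothetical.

import numpy as np

def ogm_function_restart(f, grad, x0, L, n_iter=300):
    # Gradient sequence y, momentum sequence x, and momentum parameter theta,
    # following the OGM iteration; restart resets theta when f increases.
    x = np.array(x0, dtype=float)
    y = x.copy()
    theta = 1.0
    f_prev = f(y)
    history = [f_prev]
    for _ in range(n_iter):
        y_new = x - grad(x) / L                                  # gradient step with step size 1/L
        theta_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))  # momentum parameter update
        # OGM momentum: the extra (y_new - x) term distinguishes it from the fast gradient method.
        x_new = (y_new
                 + (theta - 1.0) / theta_new * (y_new - y)
                 + theta / theta_new * (y_new - x))
        f_new = f(y_new)
        if f_new > f_prev:          # function-value restart test
            theta_new = 1.0         # discard accumulated momentum
            x_new = y_new
        x, y, theta, f_prev = x_new, y_new, theta_new, f_new
        history.append(f_new)
    return y, history

# Hypothetical usage: ill-conditioned quadratic f(x) = 0.5 x'Ax - b'x, with L = max eigenvalue of A.
A = np.diag([1.0, 200.0])
b = np.array([1.0, 1.0])
x_final, hist = ogm_function_restart(lambda x: 0.5 * x @ A @ x - b @ x,
                                     lambda x: A @ x - b,
                                     x0=np.zeros(2), L=200.0)

On such ill-conditioned quadratics, the non-restarted accelerated iteration tends to oscillate, whereas the restart test above suppresses the oscillations and yields faster (empirically linear) convergence, which is the behavior the paper analyzes for OGM.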

Keywords: Convex optimization; First-order methods; Accelerated gradient methods; Optimized gradient method; Restarting; 80M50; 90C06; 90C25
Date: 2018

Downloads: http://link.springer.com/10.1007/s10957-018-1287-4 (abstract, text/html)
Access to the full text of the articles in this series is restricted.


Persistent link: https://EconPapers.repec.org/RePEc:spr:joptap:v:178:y:2018:i:1:d:10.1007_s10957-018-1287-4

Ordering information: This journal article can be ordered from
http://www.springer. ... cs/journal/10957/PS2

DOI: 10.1007/s10957-018-1287-4


Journal of Optimization Theory and Applications is currently edited by Franco Giannessi and David G. Hull

Handle: RePEc:spr:joptap:v:178:y:2018:i:1:d:10.1007_s10957-018-1287-4