A nonmonotone accelerated proximal gradient method with variable stepsize strategy for nonsmooth and nonconvex minimization problems

Hongwei Liu, Ting Wang and Zexian Liu
Additional contact information
Hongwei Liu: Xidian University
Ting Wang: Xi’an University of Posts and Telecommunications
Zexian Liu: Guizhou University

Journal of Global Optimization, 2024, vol. 89, issue 4, No 2, 863-897

Abstract: In this paper, we consider the problem of minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting, which arises in many contemporary applications such as machine learning, statistics, and signal/image processing. To solve this problem, we propose a new nonmonotone accelerated proximal gradient method with a variable stepsize strategy. Note that incorporating an inertial term into the proximal gradient method is a simple and efficient acceleration technique, but the descent property of the proximal gradient algorithm is lost. In our algorithm, the iterates generated by the inertial proximal gradient scheme are accepted when the objective function values decrease or increase appropriately; otherwise, the iteration point is generated by the proximal gradient scheme, which ensures that the function values on a subset of the iterates are decreasing. We also introduce a variable stepsize strategy that requires neither a line search nor knowledge of the Lipschitz constant, making the algorithm easy to implement. We show that the sequence of iterates generated by the algorithm converges to a critical point of the objective function. Further, under the assumption that the objective function satisfies the Kurdyka–Łojasiewicz inequality, we prove convergence rates for the objective function values and the iterates. Moreover, numerical results on both convex and nonconvex problems are reported to demonstrate the effectiveness and superiority of the proposed method and stepsize strategy.
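To make the algorithmic idea in the abstract concrete, the following is a minimal runnable sketch of a nonmonotone accelerated proximal gradient loop, written for an illustrative l1-regularized least-squares objective. The acceptance test, the Barzilai–Borwein stepsize, and the Zhang–Hager-style averaged reference value are stand-ins chosen for this sketch; the paper's own variable stepsize rule and nonmonotone criterion may differ.

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def nonmonotone_apg(A, b, lam, x0, max_iter=500, delta=1e-4, eta=0.8, tol=1e-8):
    """Minimize F(x) = 0.5*||Ax - b||^2 + lam*||x||_1 with an inertial
    proximal gradient step that is accepted only when it passes a
    nonmonotone sufficient-decrease test; otherwise a plain proximal
    gradient step is taken as a safeguard."""
    f_grad = lambda x: A.T @ (A @ x - b)
    F = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.sum(np.abs(x))

    x_prev = x0.copy()
    x = x0.copy()
    t = 1.0                                # Nesterov extrapolation parameter
    L0 = np.linalg.norm(A, 2) ** 2         # spectral-norm bound on the Lipschitz constant
    alpha = 1.0 / L0                       # safe initial stepsize
    c, q = F(x), 1.0                       # Zhang-Hager averaged reference value

    for k in range(max_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # inertial (extrapolated) point

        # BB-type variable stepsize (a stand-in for the paper's strategy),
        # clamped to a safe range so no line search is needed here.
        s, r = x - x_prev, f_grad(x) - f_grad(x_prev)
        if k > 0 and s @ r > 1e-12:
            alpha = np.clip((s @ s) / (s @ r), 0.1 / L0, 10.0 / L0)

        # trial accelerated proximal gradient step from the inertial point
        z = prox_l1(y - alpha * f_grad(y), alpha * lam)

        if F(z) <= c - delta * np.linalg.norm(z - y) ** 2:
            x_next = z                     # nonmonotone test passed: accept
        else:
            # safeguard: plain proximal gradient step from x
            v = prox_l1(x - alpha * f_grad(x), alpha * lam)
            x_next = v if F(v) <= F(z) else z

        q_next = eta * q + 1.0             # update averaged reference value
        c = (eta * q * c + F(x_next)) / q_next
        q = q_next

        if np.linalg.norm(x_next - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_next
        x_prev, x, t = x, x_next, t_next
    return x
```

On a random instance this would be called as, e.g., x = nonmonotone_apg(A, b, 0.1, np.zeros(A.shape[1])); the averaged reference value c is what lets the method accept occasional increases in F while keeping the function values decreasing on a subsequence.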

Keywords: Nonconvex; Nonsmooth; Accelerated proximal gradient method; Variable stepsize strategy; Kurdyka–Łojasiewicz property; Convergence; 94A12; 65K10; 94A08; 90C25
Date: 2024

Downloads: (external link)
http://link.springer.com/10.1007/s10898-024-01366-4 Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:jglopt:v:89:y:2024:i:4:d:10.1007_s10898-024-01366-4

Ordering information: This journal article can be ordered from
http://www.springer. ... search/journal/10898

DOI: 10.1007/s10898-024-01366-4

Journal of Global Optimization is currently edited by Sergiy Butenko

More articles in Journal of Global Optimization from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Handle: RePEc:spr:jglopt:v:89:y:2024:i:4:d:10.1007_s10898-024-01366-4