Linear convergence analysis of the use of gradient projection methods on total variation problems
Pengwen Chen and Changfeng Gui
Computational Optimization and Applications, 2013, vol. 54, issue 2, 283-315
Abstract:
Optimization problems involving total variation appear frequently in image analysis models, where they preserve the sharp edges of images. Direct gradient descent methods usually converge very slowly on such problems. Recently, many duality-based gradient projection methods have been proposed to accelerate convergence. In the dual formulation, the cost function of the optimization problem is singular and the constraint set is not polyhedral. In this paper, we establish two inequalities related to projected gradients and show that, under some non-degeneracy conditions, the rate of convergence is linear. Copyright Springer Science+Business Media, LLC 2013
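To make the duality-based approach concrete, the sketch below implements a Chambolle-style projected gradient iteration on the dual of the Rudin-Osher-Fatemi total variation denoising model in NumPy. It illustrates the class of methods analyzed in the paper rather than the authors' specific scheme; the function names (grad, div, tv_denoise_dual) and parameters (lam, tau, n_iter) are assumptions made for this sketch.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad.
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_dual(f, lam=0.1, tau=0.125, n_iter=200):
    """Projected gradient iteration on the dual TV denoising problem.

    Illustrative sketch (Chambolle-style): the dual variable p is updated by a
    gradient step followed by a projection onto the pointwise unit-ball
    constraint set, and the primal solution is recovered as u = f - lam * div(p).
    """
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        # Gradient step plus normalization enforces the constraint |p| <= 1.
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)
```

A typical call would be u = tv_denoise_dual(noisy_image, lam=0.1); the step size tau = 1/8 is the standard choice guaranteeing convergence of this particular iteration.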
Keywords: Total variation; Gradient projection methods; Non-degeneracy conditions; Linear convergence; Projected gradients
Date: 2013
Downloads: http://hdl.handle.net/10.1007/s10589-011-9412-4 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:spr:coopap:v:54:y:2013:i:2:p:283-315
Ordering information: This journal article can be ordered from
http://www.springer.com/math/journal/10589
DOI: 10.1007/s10589-011-9412-4
Computational Optimization and Applications is currently edited by William W. Hager
More articles in Computational Optimization and Applications from Springer
Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.