Convergence of the Surrogate Lagrangian Relaxation Method
Mikhail A. Bragin,
Peter B. Luh,
Joseph H. Yan,
Nanpeng Yu and
Gary A. Stern
Additional contact information
Mikhail A. Bragin: University of Connecticut
Peter B. Luh: University of Connecticut
Joseph H. Yan: Southern California Edison
Nanpeng Yu: Southern California Edison
Gary A. Stern: Southern California Edison
Journal of Optimization Theory and Applications, 2015, vol. 164, issue 1, No 9, 173-201
Abstract:
Studies have shown that the surrogate subgradient method, used to optimize non-smooth dual functions within the Lagrangian relaxation framework, can lead to significant computational improvements compared with the subgradient method. The key idea is to obtain surrogate subgradient directions that form acute angles toward the optimal multipliers without fully minimizing the relaxed problem. The major difficulty of the method is its convergence, since both the convergence proof and the practical implementation require knowledge of the optimal dual value. Adaptive estimation of the optimal dual value may lead to divergence and to the loss of the lower-bound property for surrogate dual values. The main contribution of this paper is the development of the surrogate Lagrangian relaxation method and the proof of its convergence to the optimal multipliers, without knowledge of the optimal dual value and without fully optimizing the relaxed problem. Moreover, for practical implementations, a stepsizing formula that guarantees convergence without requiring the optimal dual value has been constructively developed. The key idea is to select stepsizes so that the distances between Lagrange multipliers at consecutive iterations decrease and, as a result, the Lagrange multipliers converge to a unique limit. At the same time, stepsizes are kept sufficiently large so that the algorithm does not terminate prematurely. At convergence, the lower-bound property of the surrogate dual is guaranteed. Testing results demonstrate that non-smooth dual functions can be efficiently optimized, and that the new method converges faster than other methods available for optimizing non-smooth dual functions, namely the simple subgradient method, the subgradient-level method, and the incremental subgradient method.
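The stepsize idea summarized in the abstract can be illustrated in code. Below is a minimal, hypothetical Python sketch on a toy separable integer program; it is not the paper's exact formulation. The problem data, the one-subproblem-at-a-time surrogate minimization, and the tuning constants M and p are all assumptions made for the example. The contracting update c_k = alpha_k * c_{k-1} * |g_{k-1}| / |g_k|, with alpha_k < 1 approaching 1, shrinks the distance between consecutive multipliers without requiring the optimal dual value.

    import numpy as np

    # Toy separable problem (hypothetical data, for illustration only):
    #   min_x  sum_i costs[i][x_i]   s.t.  sum_i x_i = b,  x_i in {0, 1, 2}.
    # The coupling constraint is relaxed with a multiplier lam.
    rng = np.random.default_rng(0)
    n, b = 5, 6
    costs = rng.uniform(0.0, 10.0, size=(n, 3))

    def relaxed_value(x, lam):
        # Surrogate dual value L~(x, lam) = f(x) + lam * g(x).
        return costs[np.arange(n), x].sum() + lam * (x.sum() - b)

    def surrogate_min(x, lam, i):
        # Approximate (surrogate) minimization: optimize subproblem i only,
        # keeping the other components fixed (an illustrative choice).
        x = x.copy()
        x[i] = int(np.argmin(costs[i] + lam * np.arange(3)))
        return x

    lam, c = 0.0, 1.0          # initial multiplier and stepsize (assumed)
    M, p = 20.0, 0.1           # tuning constants for alpha_k (assumed)
    x = np.zeros(n, dtype=int)
    g_prev = x.sum() - b       # surrogate subgradient = constraint violation
    for k in range(1, 500):
        x_new = surrogate_min(x, lam, (k - 1) % n)
        # Surrogate optimality condition: accept only improving points.
        if relaxed_value(x_new, lam) < relaxed_value(x, lam):
            x = x_new
        g = x.sum() - b
        if g == 0:
            break              # coupling constraint satisfied
        # Contracting stepsize: c_k = alpha_k * c_{k-1} * |g_{k-1}| / |g_k|,
        # with alpha_k < 1 tending to 1, so multiplier distances decrease
        # while stepsizes stay large enough to avoid premature termination.
        alpha = 1.0 - 1.0 / (M * k ** p)
        c = alpha * c * abs(g_prev) / abs(g)
        lam += c * g
        g_prev = g

    print(f"multiplier: {lam:.4f}, violation: {x.sum() - b}")

Because each iteration solves only one small subproblem rather than the full relaxed problem, the per-iteration cost stays low, which is the source of the computational savings the abstract describes.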
Keywords: Non-smooth optimization; Subgradient methods; Surrogate subgradient method; Lagrangian relaxation; Mixed-integer programming; 90C25
Date: 2015
Citations: 6 (in EconPapers)
Downloads: http://link.springer.com/10.1007/s10957-014-0561-3 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:joptap:v:164:y:2015:i:1:d:10.1007_s10957-014-0561-3
Ordering information: This journal article can be ordered from
http://www.springer. ... cs/journal/10957/PS2
DOI: 10.1007/s10957-014-0561-3
Journal of Optimization Theory and Applications is currently edited by Franco Giannessi and David G. Hull
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.