On the convergence order of value function relaxations used in decomposition-based global optimization of nonconvex stochastic programs
Dillard Robertson,
Pengfei Cheng and
Joseph K. Scott
Additional contact information
Dillard Robertson: Georgia Institute of Technology
Pengfei Cheng: Georgia Institute of Technology
Joseph K. Scott: Georgia Institute of Technology
Journal of Global Optimization, 2025, vol. 91, issue 4, No 1, 742 pages
Abstract:
This paper analyzes the convergence rate of decomposition algorithms for globally solving nonconvex stochastic programming problems. We focus on two recent algorithms, termed CZ (Cao and Zavala in J Glob Optim 75:393–416, 2019) and LG (Li and Grossmann in J Glob Optim 75:247–272, 2019), that guarantee global optimality while achieving favorable decomposed scaling with respect to the number of scenarios. Both methods project the problem into the space of first-stage decisions and apply a spatial-B&B search in this reduced space. Consequently, we observe that they are subject to the results of prior studies on the efficiency of general spatial-B&B algorithms. Such studies have concluded that, to avoid very slow convergence due to the cluster problem, it is necessary (but not sufficient) for the B&B lower bounding problems to have a sufficiently high Hausdorff convergence order. We apply this concept to the CZ and LG decomposition algorithms by first arguing that their lower bounding procedures can be interpreted as defining relaxations in the reduced space of first-stage decisions, and then analyzing the Hausdorff convergence of these relaxations in detail. The results are found to depend strongly on the regularity of the recourse optimal value functions. The relaxations used by CZ are found to be first-order convergent or less, while second order is generally necessary to avoid clustering. In contrast, the relaxations used by LG achieve the highest order possible within the decomposition framework we consider, which is second order when the value functions are smooth, but first order or less otherwise. Unfortunately, these functions are only guaranteed to be lower semicontinuous under standard assumptions. This points to a larger limitation of the projection-based decomposition approach, which is discussed at length.
Keywords: Stochastic programming; Decomposition; Global optimization; Nonconvex optimization; Convergence rate
Date: 2025
Downloads: http://link.springer.com/10.1007/s10898-024-01458-1 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:jglopt:v:91:y:2025:i:4:d:10.1007_s10898-024-01458-1
Ordering information: This journal article can be ordered from
http://www.springer. ... search/journal/10898
DOI: 10.1007/s10898-024-01458-1
Journal of Global Optimization is currently edited by Sergiy Butenko
More articles in Journal of Global Optimization from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.