Stochastic Localization Methods for Convex Discrete Optimization via Simulation
Haixiang Zhang,
Zeyu Zheng and
Javad Lavaei
Additional contact information
Haixiang Zhang: Department of Mathematics, University of California, Berkeley, California 94720
Zeyu Zheng: Department of Industrial Engineering and Operations Research, University of California, Berkeley, California 94720
Javad Lavaei: Department of Industrial Engineering and Operations Research, University of California, Berkeley, California 94720
Operations Research, 2025, vol. 73, issue 2, 927-948
Abstract:
We develop and analyze a set of new sequential simulation-optimization algorithms for large-scale multidimensional discrete optimization via simulation problems with a convexity structure. The "large-scale" notion refers to the discrete decision variable having a large number of values from which to choose on each dimension. The proposed algorithms are designed to identify a solution that is close to the optimal solution, to within any given precision level and with any given probability. To achieve this target, by exploiting the convexity structure, our algorithm design does not need to scan all the choices of the decision variable; instead, it sequentially draws a subset of choices and uses them to "localize" potentially near-optimal solutions to an adaptively shrinking region. To show the power of the proposed methods based on the localization idea, we first consider one-dimensional large-scale problems. We develop the shrinking uniform sampling algorithm, which is proved to achieve the target with an optimal expected simulation cost under an asymptotic criterion. For multidimensional problems, we combine the idea of localization with subgradient information and propose a framework for designing stochastic cutting-plane methods, whose expected simulation costs depend only weakly on the scale and the dimension of the problems. In addition, utilizing the discrete nature of the problems, we propose a stochastic dimension-reduction algorithm, which does not require prior information about the Lipschitz constant of the objective function and whose simulation costs are upper bounded by a value independent of the Lipschitz constant. We implement the proposed algorithms on synthetic problems and queueing simulation-optimization problems and demonstrate better performance than benchmark methods, especially on large-scale examples.
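The localization idea for the one-dimensional case can be illustrated with a minimal sketch. The code below is not the authors' shrinking uniform sampling algorithm as published; it is a simplified, hedged illustration of the general principle: evaluate a noisy convex objective on a coarse uniform grid over the current interval, then use convexity to shrink the interval around the empirical minimizer. The function names, the fixed replication count `reps`, and the grid size are illustrative assumptions, not parameters from the paper.

```python
import random

def noisy_eval(f, x, reps=200):
    # Average `reps` noisy simulation replications of f at x
    # (Gaussian noise stands in for simulation output variability).
    return sum(f(x) + random.gauss(0, 1) for _ in range(reps)) / reps

def shrinking_uniform_search(f, lo, hi, grid=5, reps=200):
    """Localize a minimizer of a convex function on the integers {lo, ..., hi}
    by repeatedly sampling a uniform grid and shrinking the search interval."""
    while hi - lo > grid:
        # Uniformly spaced grid points in the current interval.
        pts = [lo + i * (hi - lo) // grid for i in range(grid + 1)]
        vals = [noisy_eval(f, x, reps) for x in pts]
        k = vals.index(min(vals))
        # Convexity: if the estimates are accurate enough, the true minimizer
        # lies between the neighbors of the empirically best grid point.
        lo = pts[max(k - 1, 0)]
        hi = pts[min(k + 1, grid)]
    # The remaining region is small; scan it exhaustively.
    return min(range(lo, hi + 1), key=lambda x: noisy_eval(f, x, reps))
```

Because each round retains only a constant fraction of the current interval, the number of rounds grows only logarithmically in the number of feasible choices, which is the source of the low dependence on problem scale that the abstract describes.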
Keywords: Simulation; discrete optimization via simulation; convex optimization; shrinking uniform sampling algorithm; best achievable performance; stochastic cutting-plane methods; dimension reduction method (search for similar items in EconPapers)
Date: 2025
Downloads: http://dx.doi.org/10.1287/opre.2022.0030 (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:73:y:2025:i:2:p:927-948
More articles in Operations Research from INFORMS Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.