Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields

Mark Semelhago, Barry L. Nelson, Eunhye Song and Andreas Wächter
Additional contact information
Mark Semelhago: Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois 60208
Barry L. Nelson: Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois 60208
Eunhye Song: Department of Industrial and Manufacturing Engineering, The Pennsylvania State University, University Park, Pennsylvania 16802
Andreas Wächter: Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois 60208

INFORMS Journal on Computing, 2021, vol. 33, issue 3, 915-930

Abstract: Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to “modest” problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP—discrete Gaussian Markov random fields—and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study.

Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that can only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov random field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. As illustrations, the paper solves problems of significantly greater size than any other existing algorithm with similar guarantees can handle.
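To illustrate the kind of sparse linear algebra the abstract refers to, the sketch below conditions a discrete GMRF prior on noisy simulation output and extracts posterior means and variances only at a small search set of promising solutions. This is a minimal illustration, not the authors' implementation: the chain-graph precision matrix, the node indices, and all numerical values are made-up assumptions, and it relies only on standard Gaussian conditioning identities and SciPy's sparse solvers.

```python
# Minimal sketch (assumptions noted above): condition a GMRF prior with a
# sparse precision matrix on noisy simulation output, then read off the
# conditional mean and variance at a few candidate solutions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000                      # number of feasible solutions (nodes)
mu = np.zeros(n)              # prior mean of the GMRF

# Sparse prior precision Q for a 1-D chain graph (a toy stand-in for the
# lattice neighborhood structure a GMRF model would use).
main = 2.0 * np.ones(n)
off = -0.95 * np.ones(n - 1)
Q = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

# Simulated nodes, their sample means, and the precisions (reciprocal
# variances) of those sample means -- all illustrative values.
design = np.array([100, 400, 700])
ybar = np.array([1.2, -0.8, 0.4])
prec = np.array([50.0, 50.0, 50.0])

# Intrinsic-precision matrix: diagonal, nonzero only at simulated nodes.
Q_eps = sp.csc_matrix((prec, (design, design)), shape=(n, n))

# The conditional (posterior) precision stays sparse: Q_bar = Q + Q_eps.
Q_bar = (Q + Q_eps).tocsc()
lu = splu(Q_bar)              # sparse LU factorization, reused below

# Conditional mean: m = mu + Q_bar^{-1} Q_eps (ybar - mu), nonzero
# right-hand side only at the simulated nodes.
resid = np.zeros(n)
resid[design] = prec * (ybar - mu[design])
m = mu + lu.solve(resid)

# Conditional variances only where needed (e.g., a small search set of
# promising solutions): the j-th diagonal entry of Q_bar^{-1} is obtained
# by solving Q_bar v = e_j, avoiding the full n-by-n covariance matrix.
search_set = np.array([50, 390, 710, 950])
cond_var = np.empty(len(search_set))
for k, j in enumerate(search_set):
    e = np.zeros(n)
    e[j] = 1.0
    cond_var[k] = lu.solve(e)[j]

print("posterior means:", m[search_set])
print("posterior vars :", cond_var)
```

The loop only hints at the divide-and-conquer idea described in the abstract: expensive evaluations of the global conditional distribution are infrequent, while the iterations in between restrict the conditional-variance computations to a small search set, which is far cheaper than forming the full covariance over all feasible solutions.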

Keywords: design of experiments; efficiency; statistical analysis
Date: 2021

Downloads: http://dx.doi.org/10.1287/ijoc.2020.0971 (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:inm:orijoc:v:33:y:2021:i:3:p:915-930

Handle: RePEc:inm:orijoc:v:33:y:2021:i:3:p:915-930