Robust expected improvement for Bayesian optimization
Ryan B. Christianson and Robert B. Gramacy
IISE Transactions, 2024, vol. 56, issue 12, 1294-1306
Abstract:
Bayesian Optimization (BO) links Gaussian Process (GP) surrogates with sequential design toward optimizing expensive-to-evaluate black-box functions. Example design heuristics, or so-called acquisition functions, like expected improvement, balance exploration and exploitation to furnish global solutions under stringent evaluation budgets. However, they fall short when solving for robust optima, meaning a preference for solutions in a wider domain of attraction. Robust solutions are useful when inputs are imprecisely specified, or where a series of solutions is desired. A common mathematical programming technique in such settings involves an adversarial objective, biasing a local solver away from “sharp” troughs. Here we propose a surrogate modeling and active learning technique called robust expected improvement that ports adversarial methodology into the BO/GP framework. After describing the methods, we illustrate and draw comparisons to several competitors on benchmark synthetic exercises and real problems of varying complexity.
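For context, below is a minimal sketch of the standard expected-improvement (EI) acquisition on a Gaussian-process surrogate that the article builds on. It assumes scikit-learn and SciPy, uses the minimization convention, and is illustrative only; it is not the robust EI criterion proposed in the paper.

```python
# Sketch of standard expected improvement (EI) with a GP surrogate.
# Not the paper's robust EI; only the baseline acquisition it extends.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, f_best):
    """EI(x) = E[max(f_best - Y(x), 0)] under the GP posterior (minimization)."""
    mu, sd = gp.predict(X_cand, return_std=True)
    sd = np.maximum(sd, 1e-12)            # guard against zero predictive variance
    z = (f_best - mu) / sd
    return (f_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

# Toy usage: one BO step on a 1-d test function (hypothetical example).
f = lambda x: np.sin(10 * x) * np.exp(-x) + (x - 0.5) ** 2
X = np.random.default_rng(0).uniform(0, 1, (8, 1))
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
X_cand = np.linspace(0, 1, 500).reshape(-1, 1)
x_next = X_cand[np.argmax(expected_improvement(X_cand, gp, y.min()))]
```

The robust variant described in the abstract would additionally bias this criterion away from narrow troughs of attraction via an adversarial objective, favoring optima that remain good under small input perturbations.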
Date: 2024
Downloads: http://hdl.handle.net/10.1080/24725854.2023.2275166 (text/html; access to full text is restricted to subscribers)
Persistent link: https://EconPapers.repec.org/RePEc:taf:uiiexx:v:56:y:2024:i:12:p:1294-1306
Ordering information: This journal article can be ordered from http://www.tandfonline.com/pricing/journal/uiie20
DOI: 10.1080/24725854.2023.2275166
IISE Transactions is currently edited by Jianjun Shi