Calibration of Distributionally Robust Empirical Optimization Models
Jun‐ya Gotoh,
Michael Jong Kim and
Andrew E. B. Lim
Additional contact information
Jun‐ya Gotoh: Department of Industrial and Systems Engineering, Chuo University, Tokyo 112-8551, Japan
Michael Jong Kim: UBC Sauder School of Business, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
Andrew E. B. Lim: Department of Analytics and Operations and Department of Finance, NUS Business School, National University of Singapore, Singapore 119245; Institute for Operations Research and Analytics, National University of Singapore, Singapore 117602
Operations Research, 2021, vol. 69, issue 5, 1630-1650
Abstract:
We study the out-of-sample properties of robust empirical optimization problems with smooth φ-divergence penalties and smooth concave objective functions, and we develop a theory for data-driven calibration of the nonnegative “robustness parameter” δ that controls the size of the deviations from the nominal model. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a “little bit of robustness” (i.e., δ small and positive) is a significant reduction in the variance of the out-of-sample reward, whereas the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that substantial variance (sensitivity) reduction is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods such as the bootstrap. Our examples show that robust solutions resulting from “open-loop” calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can be very conservative out of sample, whereas those corresponding to the robustness parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap), with no regard for the variance, are often insufficiently robust.
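The calibration procedure described above — solve the penalized robust problem over a grid of robustness parameters δ and use the bootstrap to approximate the resulting mean-variance frontier of the out-of-sample reward — can be illustrated with a short numerical sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: it assumes a KL-divergence penalty (for which the inner worst-case problem has a closed log-sum-exp form), a toy smooth concave reward, a scalar decision, and an illustrative data-generating process.

```python
# A minimal sketch (not the authors' code) of bootstrap calibration of the
# robustness parameter delta for a robust empirical optimization problem
# with a KL-divergence penalty.  The reward function, data-generating
# process, delta grid, and bootstrap size are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def reward(x, xi):
    """Illustrative smooth concave reward in the decision x."""
    return x * xi - 0.5 * x ** 2

def robust_objective(x, xi, delta):
    """Penalized worst-case expected reward for the KL divergence.

    min_Q { E_Q[r(x, xi)] + (1/delta) * KL(Q || P_n) }
      = -(1/delta) * log E_{P_n}[ exp(-delta * r(x, xi)) ],
    which recovers the empirical mean as delta -> 0.
    """
    r = reward(x, xi)
    if delta == 0.0:
        return r.mean()
    a = -delta * r
    m = a.max()                      # log-sum-exp stabilization
    return -(m + np.log(np.exp(a - m).mean())) / delta

def solve_robust(xi, delta):
    """Maximize the robust objective over a scalar decision."""
    res = minimize_scalar(lambda x: -robust_objective(x, xi, delta),
                          bounds=(0.0, 10.0), method="bounded")
    return res.x

# Illustrative training sample from an unknown "true" model.
n = 200
xi_train = rng.normal(loc=2.0, scale=1.0, size=n)

# Bootstrap approximation of the robust mean-variance frontier:
# each resample plays the role of a training set, and the original
# empirical distribution plays the role of the out-of-sample model.
deltas = np.concatenate(([0.0], np.logspace(-2, 1, 12)))
B = 200
print("  delta    mean reward    variance")
for delta in deltas:
    oos = np.empty(B)
    for b in range(B):
        xi_b = rng.choice(xi_train, size=n, replace=True)
        x_b = solve_robust(xi_b, delta)
        oos[b] = reward(x_b, xi_train).mean()
    print(f"{delta:7.3f}   {oos.mean():11.4f}   {oos.var():9.6f}")
# Inspecting the printed frontier lets one pick a delta that trades a small
# change in mean reward for a large variance reduction, in the spirit of
# the robust mean-variance frontier discussed in the abstract.
```

Under these assumptions, the δ = 0 row corresponds to the nominal (non-robust) empirical problem, so scanning the frontier away from δ = 0 is one simple way to visualize the trade-off between the mean and the variance of the out-of-sample reward that the paper analyzes.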
Keywords: decision analysis: risk; sensitivity; probability: stochastic model applications; programming: nonlinear; stochastic; statistics: data; sampling; optimization; distributionally robust optimization; calibration; worst-case sensitivity; variance reduction
Date: 2021
Downloads: http://dx.doi.org/10.1287/opre.2020.2041 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:69:y:2021:i:5:p:1630-1650