EconPapers

Debiasing In-Sample Policy Performance for Small-Data, Large-Scale Optimization

Vishal Gupta, Michael Huang and Paat Rusmevichientong
Additional contact information
Vishal Gupta: Data Science and Operations, Marshall School of Business, University of Southern California, Los Angeles, California 90089
Michael Huang: Data Science and Operations, Marshall School of Business, University of Southern California, Los Angeles, California 90089
Paat Rusmevichientong: Data Science and Operations, Marshall School of Business, University of Southern California, Los Angeles, California 90089

Operations Research, 2024, vol. 72, issue 2, 848-870

Abstract: Motivated by the poor performance of cross-validation in settings where data are scarce, we propose a novel estimator of the out-of-sample performance of a policy in data-driven optimization. Our approach exploits the optimization problem’s sensitivity analysis to estimate the gradient of the optimal objective value with respect to the amount of noise in the data and uses the estimated gradient to debias the policy’s in-sample performance. Unlike cross-validation techniques, our approach avoids sacrificing data for a test set and uses all data when training and hence is well suited to settings where data are scarce. We prove bounds on the bias and variance of our estimator for optimization problems with uncertain linear objectives but known, potentially nonconvex, feasible regions. For more specialized optimization problems where the feasible region is “weakly coupled” in a certain sense, we prove stronger results. Specifically, we provide explicit high-probability bounds on the error of our estimator that hold uniformly over a policy class and depend on the problem’s dimension and the policy class’s complexity. Our bounds show that under mild conditions, the error of our estimator vanishes as the dimension of the optimization problem grows, even if the amount of available data remains small and constant. Said differently, we prove our estimator performs well in the small-data, large-scale regime. Finally, we numerically compare our proposed method to state-of-the-art approaches through a case study on dispatching emergency medical response services using real data. Our method provides more accurate estimates of out-of-sample performance and learns better-performing policies.
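The abstract sketches the core idea: use the optimization problem’s sensitivity to noise in the data to correct the optimism of in-sample performance. Below is a minimal illustrative sketch of that idea in Python for a toy top-k selection problem with Gaussian noise; the noise-injection finite difference, the linear-in-variance extrapolation, and all parameter choices are simplifying assumptions for illustration, not the paper’s estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: pick the k items with the largest (unknown) true rewards, given
# only noisy estimates. The in-sample value of the chosen policy is
# optimistically biased (a "winner's curse"), which the debiasing step corrects.
n, k = 200, 20                           # problem dimension and budget
sigma = 1.0                              # noise level of the observed estimates
mu = rng.normal(0.0, 1.0, n)             # true (unobserved) rewards
c_hat = mu + rng.normal(0.0, sigma, n)   # observed noisy estimates


def plug_in_value(costs, k):
    """Return the chosen indices and the optimal in-sample objective value."""
    idx = np.argsort(costs)[-k:]
    return idx, costs[idx].sum()


idx, in_sample = plug_in_value(c_hat, k)
true_value = mu[idx].sum()


# Heuristic sensitivity estimate: inject a small amount of extra noise and use a
# finite difference to approximate how the optimal in-sample value grows with the
# noise variance, then extrapolate back toward zero noise. This scheme is an
# illustrative stand-in for the paper's gradient-based debiasing.
def noise_sensitivity(costs, k, sigma, eps=0.1, reps=500, rng=rng):
    base = plug_in_value(costs, k)[1]
    bumped = np.mean([
        plug_in_value(costs + rng.normal(0.0, eps * sigma, costs.size), k)[1]
        for _ in range(reps)
    ])
    # Slope of the optimal value per unit of relative noise variance.
    return (bumped - base) / eps**2


slope = noise_sensitivity(c_hat, k, sigma)
debiased = in_sample - slope             # subtract the estimated optimism

print(f"in-sample value: {in_sample:7.2f}")
print(f"debiased value : {debiased:7.2f}")
print(f"true value     : {true_value:7.2f}")
```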

Keywords: Optimization; small-data; large-scale; data-driven optimization; large-scale regime; cross-validation; end-to-end optimization
Date: 2024

Downloads: http://dx.doi.org/10.1287/opre.2022.2377 (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:72:y:2024:i:2:p:848-870

More articles in Operations Research from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

 
Page updated 2025-03-19
Handle: RePEc:inm:oropre:v:72:y:2024:i:2:p:848-870