Distributionally Robust Batch Contextual Bandits
Nian Si,
Fan Zhang,
Zhengyuan Zhou and
Jose Blanchet
Additional contact information
Nian Si: Department of Management Science & Engineering, Stanford University, Stanford, California 94305
Fan Zhang: Department of Management Science & Engineering, Stanford University, Stanford, California 94305
Zhengyuan Zhou: Stern School of Business, New York University, New York, New York 10012
Jose Blanchet: Department of Management Science & Engineering, Stanford University, Stanford, California 94305
Management Science, 2023, vol. 69, issue 10, 5772-5793
Abstract:
Policy learning using historical observational data is an important problem with widespread applications. Examples include selecting offers, prices, or advertisements for consumers; choosing bids in contextual first-price auctions; and selecting medication based on patients’ characteristics. However, the existing literature rests on the crucial assumption that the future environment in which the learned policy will be deployed is the same as the past environment that generated the data: an assumption that is often false or too coarse an approximation. In this paper, we lift this assumption and aim to learn a distributionally robust policy from incomplete observational data. We first present a policy evaluation procedure that allows us to assess how well a policy does under the worst-case environment shift. We then establish a central limit theorem type guarantee for this proposed policy evaluation scheme. Leveraging this evaluation scheme, we further propose a novel learning algorithm that learns a policy robust to adversarial perturbations and unknown covariate shifts, with a performance guarantee based on the theory of uniform convergence. Finally, we empirically test the effectiveness of our proposed algorithm on synthetic datasets and demonstrate that it provides the robustness that standard policy learning algorithms lack. We conclude the paper with a comprehensive application of our methods to a real-world voting dataset.
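As an illustration of the kind of distributionally robust policy evaluation the abstract describes, the sketch below estimates a policy's worst-case value over a KL-divergence ball around the logged data distribution, applied to inverse-propensity-weighted rewards and solved through the standard dual formula for the KL-constrained worst case. The choice of uncertainty set, the estimator, and all names (ipw_rewards, robust_value, delta) are illustrative assumptions for exposition, not the paper's actual formulation or implementation.

    # Hypothetical sketch: worst-case policy value under a KL uncertainty set,
    # estimated from logged contextual-bandit data. Not the paper's code.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def ipw_rewards(contexts, actions, rewards, propensities, policy):
        """Inverse-propensity-weighted rewards for a candidate policy.

        policy(context) returns the action the candidate policy would take.
        Only logged rounds where the candidate agrees with the logged action
        contribute, reweighted by 1 / logging propensity."""
        chosen = np.array([policy(x) for x in contexts])
        match = (chosen == actions).astype(float)
        return match * rewards / propensities

    def robust_value(weighted_rewards, delta):
        """Worst-case mean reward over distributions Q with KL(Q || P) <= delta,
        via the dual:  sup_{alpha > 0}  -alpha * log E[exp(-R / alpha)] - alpha * delta."""
        r = np.asarray(weighted_rewards, dtype=float)

        def neg_dual(alpha):
            # Numerically stable log-mean-exp of -r / alpha.
            z = -r / alpha
            m = z.max()
            lme = m + np.log(np.mean(np.exp(z - m)))
            return alpha * lme + alpha * delta  # negative of the dual objective

        res = minimize_scalar(neg_dual, bounds=(1e-6, 1e6), method="bounded")
        return -res.fun

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, d, k = 2000, 5, 3
        X = rng.normal(size=(n, d))
        logged_actions = rng.integers(0, k, size=n)      # uniform logging policy
        propensities = np.full(n, 1.0 / k)
        rewards = (X[:, 0] > 0) * (logged_actions == 1) + 0.1 * rng.random(n)

        policy = lambda x: 1 if x[0] > 0 else 0          # simple candidate policy
        w = ipw_rewards(X, logged_actions, rewards, propensities, policy)
        print("nominal value:", w.mean())
        print("robust value :", robust_value(w, delta=0.1))

A larger delta widens the set of distribution shifts the adversary may choose, so the reported robust value decreases; the nominal value is recovered as delta goes to zero.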
Keywords: distributional robustness; policy learning; personalization; contextual bandits
Date: 2023
Downloads:
http://dx.doi.org/10.1287/mnsc.2023.4678 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:69:y:2023:i:10:p:5772-5793