Offline Multi-Action Policy Learning: Generalization and Optimization
Zhengyuan Zhou,
Susan Athey and
Stefan Wager
Additional contact information
Zhengyuan Zhou: Stern School of Business, New York University
Stefan Wager: Graduate School of Business, Stanford University
Operations Research, 2023, vol. 71, issue 1, 148-183
Abstract:
In many settings, a decision maker wishes to learn a rule, or policy, that maps from observable characteristics of an individual to an action. Examples include selecting offers, prices, advertisements, or emails to send to consumers, choosing a bid to submit in a contextual first-price auction, and determining which medication to prescribe to a patient. In this paper, we study the offline multi-action policy learning problem with observational data, where the policy may need to respect budget constraints or belong to a restricted policy class such as decision trees. By using the standard augmented inverse propensity weight estimator, we design and implement a policy learning algorithm that achieves asymptotically minimax-optimal regret. To the best of our knowledge, this is the first result of this type in the multi-action setup, and it provides a substantial performance improvement over existing learning algorithms. We then consider additional computational challenges that arise in implementing our method for the case where the policy is restricted to take the form of a decision tree. We propose two different approaches: one using a mixed integer program formulation and the other using a tree-search-based algorithm.
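The estimation step the abstract refers to is the augmented inverse propensity weight (AIPW, or doubly robust) score. Below is a minimal illustrative sketch, not the authors' implementation: it forms per-action AIPW scores with cross-fitted outcome and propensity models, then evaluates a candidate policy by averaging the scores of the actions it selects. Function and variable names (`aipw_scores`, `policy_value`) and the choice of random-forest nuisance models are assumptions for the example only.

```python
# Sketch of AIPW (doubly robust) scoring for offline multi-action policy
# evaluation. Illustrative only; model choices and names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_scores(X, A, Y, n_actions, n_folds=5, seed=0):
    """Return an (n, n_actions) matrix Gamma of doubly robust scores.

    Gamma[i, a] estimates the outcome unit i would have had under action a:
        Gamma[i, a] = mu_hat_a(X_i) + 1{A_i = a} / e_hat_a(X_i) * (Y_i - mu_hat_a(X_i)),
    with the outcome model mu_hat and propensity model e_hat fit by cross-fitting.
    Assumes actions are coded 0..n_actions-1 and every action appears in each fold.
    """
    n = len(Y)
    Gamma = np.zeros((n, n_actions))
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        # Propensity model P(A = a | X), fit on the training fold only.
        e_model = RandomForestClassifier(random_state=seed).fit(X[train], A[train])
        e_hat = np.clip(e_model.predict_proba(X[test]), 0.01, 1.0)
        for a in range(n_actions):
            # Outcome model for action a, fit on training-fold units that received a.
            idx = train[A[train] == a]
            mu_model = RandomForestRegressor(random_state=seed).fit(X[idx], Y[idx])
            mu_hat = mu_model.predict(X[test])
            # Doubly robust correction for test-fold units that actually received a.
            ipw = (A[test] == a) / e_hat[:, a]
            Gamma[test, a] = mu_hat + ipw * (Y[test] - mu_hat)
    return Gamma

def policy_value(Gamma, actions):
    """Estimated value of a policy that assigns `actions` (one action per unit)."""
    return Gamma[np.arange(len(actions)), actions].mean()
```

Policy learning then amounts to searching over the restricted class (for example, depth-limited decision trees, via the mixed integer program or tree-search approaches mentioned in the abstract) for the assignment rule that maximizes this estimated policy value.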
Keywords: Machine Learning and Data Science; data-driven decision making; policy learning; minimax regret; mixed integer program; heterogeneous treatment effects
Date: 2023
Downloads:
http://dx.doi.org/10.1287/opre.2022.2271 (application/pdf)
Related works:
Working Paper: Offline Multi-Action Policy Learning: Generalization and Optimization (2018) 
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:71:y:2023:i:1:p:148-183
More articles in Operations Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.