A Comparison of Methods for Treatment Assignment with an Application to Playlist Generation
Carlos Fernández-Loría,
Foster Provost,
Jesse Anderton,
Benjamin Carterette and
Praveen Chandar
Additional contact information
Carlos Fernández-Loría: Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
Foster Provost: New York University, New York, New York 10012
Jesse Anderton: Spotify, New York, New York 10007
Benjamin Carterette: Spotify, New York, New York 10007
Praveen Chandar: Spotify, New York, New York 10007
Information Systems Research, 2023, vol. 34, issue 2, 786-803
Abstract:
This study presents a systematic comparison of methods for individual treatment assignment, a general problem that arises in many applications and that has received significant attention from economists, computer scientists, and social scientists. We group the various methods proposed in the literature into three general classes of algorithms (or metalearners): learning models to predict outcomes (the O-learner), learning models to predict causal effects (the E-learner), and learning models to predict optimal treatment assignments (the A-learner). We compare the metalearners in terms of (1) their level of generality and (2) the objective function they use to learn models from data; we then discuss the implications that these characteristics have for modeling and decision making. Notably, we demonstrate analytically and empirically that optimizing for the prediction of outcomes or causal effects is not the same as optimizing for treatment assignments, suggesting that, in general, the A-learner should lead to better treatment assignments than the other metalearners. We demonstrate the practical implications of our findings in the context of choosing, for each user, the best algorithm for playlist generation in order to optimize engagement. This is the first comparison of the three different metalearners on a real-world application at scale (based on more than half a billion individual treatment assignments). In addition to supporting our analytical findings, the results show how large A/B tests can provide substantial value for learning treatment-assignment policies, rather than simply for choosing the variant that performs best on average.
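The three metalearner classes in the abstract can be illustrated with a minimal sketch. This is not code from the paper: it uses a hypothetical synthetic A/B test with one binary user feature and simple group-mean "models" standing in for fitted predictors, purely to show how the O-, E-, and A-learner objectives differ in form.

```python
import numpy as np

# Minimal synthetic A/B-test data (hypothetical, not from the paper):
# one binary user feature, a randomly assigned binary treatment, and an
# outcome that depends on the feature-treatment interaction.
rng = np.random.default_rng(0)
n = 10_000
x = rng.integers(0, 2, n)                            # user feature
t = rng.integers(0, 2, n)                            # randomized treatment
y = 0.5 + 0.3 * (t == x) + rng.normal(0.0, 0.1, n)   # treatment 1 helps x=1 users

def mean_y(mask):
    """Group-mean 'model' standing in for a fitted outcome predictor."""
    return y[mask].mean()

# O-learner: predict the outcome under each treatment, then assign the
# treatment with the highest predicted outcome.
o_policy = {xv: int(mean_y((x == xv) & (t == 1)) > mean_y((x == xv) & (t == 0)))
            for xv in (0, 1)}

# E-learner: predict the causal effect tau(x) = E[y|x,t=1] - E[y|x,t=0]
# and treat whenever the predicted effect is positive.
e_policy = {xv: int(mean_y((x == xv) & (t == 1)) - mean_y((x == xv) & (t == 0)) > 0)
            for xv in (0, 1)}

# A-learner: learn the assignment directly, e.g. via outcome-weighted
# classification; with 50/50 randomization, comparing outcome totals per
# arm approximates the inverse-propensity-weighted vote for the best arm.
a_policy = {xv: int(y[(x == xv) & (t == 1)].sum() > y[(x == xv) & (t == 0)].sum())
            for xv in (0, 1)}

print(o_policy, e_policy, a_policy)
```

On this simple example all three recover the same policy; the paper's point is that when models are misspecified or data are limited, optimizing for outcomes or effects (O/E) need not yield the assignments that the A-learner's objective targets directly.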
Keywords: treatment assignment; treatment effects; predictive modeling
Date: 2023
Citations: 2
Downloads: http://dx.doi.org/10.1287/isre.2022.1149 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:34:y:2023:i:2:p:786-803
More articles in Information Systems Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.