Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits
Ruohan Zhan, Vitor Hadad, David A. Hirshberg and Susan Athey
Papers from arXiv.org
Abstract:
It has become increasingly common for data to be collected adaptively, for example using contextual bandits. Historical data of this type can be used to evaluate other treatment assignment policies to guide future innovation or experiments. However, policy evaluation is challenging if the target policy differs from the one used to collect data, and popular estimators, including doubly robust (DR) estimators, can be plagued by bias, excessive variance, or both. In particular, when the pattern of treatment assignment in the collected data looks little like the pattern generated by the policy to be evaluated, the importance weights used in DR estimators explode, leading to excessive variance. In this paper, we improve the DR estimator by adaptively weighting observations to control its variance. We show that a t-statistic based on our improved estimator is asymptotically normal under certain conditions, allowing us to form confidence intervals and test hypotheses. Using synthetic data and public benchmarks, we provide empirical evidence for our estimator's improved accuracy and inferential properties relative to existing alternatives.
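For intuition, the sketch below shows a generic doubly robust (DR) score for off-policy value estimation and one way of reweighting observations so that those with small assignment probabilities, where importance weights explode, contribute less. The helper names (dr_scores, weighted_dr_estimate) and the particular square-root stabilizing weights are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def dr_scores(y, a, pi_target, e_logging, mu_hat):
    """Per-observation doubly robust scores for the target-policy value.

    y         : (T,)   observed rewards
    a         : (T,)   arms chosen by the logging (bandit) policy
    pi_target : (T, K) target-policy probabilities pi(arm | context_t)
    e_logging : (T, K) logging-policy assignment probabilities at time t
    mu_hat    : (T, K) outcome-model predictions mu_hat(context_t, arm)
    """
    rows = np.arange(len(y))
    # Direct-method term: model-based value under the target policy.
    direct = (pi_target * mu_hat).sum(axis=1)
    # Importance weight for the observed arm; large when the logging
    # policy rarely assigned the arm the target policy favors.
    w = pi_target[rows, a] / e_logging[rows, a]
    # Bias-correction term using the observed reward.
    correction = w * (y - mu_hat[rows, a])
    return direct + correction

def weighted_dr_estimate(y, a, pi_target, e_logging, mu_hat):
    """DR estimate with a variance-stabilizing reweighting of observations.

    The standard DR estimator averages the scores uniformly. Here each
    score is weighted by h_t, which shrinks when the observed arm's
    assignment probability is small. The square-root weighting below is
    only one plausible stabilizing choice, used for illustration.
    """
    rows = np.arange(len(y))
    gamma = dr_scores(y, a, pi_target, e_logging, mu_hat)
    h = np.sqrt(e_logging[rows, a] * pi_target[rows, a])  # hypothetical weights
    return (h * gamma).sum() / h.sum()
```

With uniform weights (h_t = 1) the second function reduces to the usual DR average; the point of the reweighting is to trade a small amount of bias for a large reduction in variance when the logging and target policies disagree.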
Date: 2021-06, Revised 2021-06
Downloads: http://arxiv.org/pdf/2106.02029 (latest version, PDF)
Related works:
Working Paper: Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits (2021)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2106.02029