Fair Effect Attribution in Parallel Online Experiments
Alexander Buchholz, Vito Bellini, Giuseppe Di Benedetto, Yannik Stein, Matteo Ruffini and Fabian Moerchen
Papers from arXiv.org
Abstract:
A/B tests serve to reliably identify the effect of changes introduced in online services. It is common for online platforms to run a large number of simultaneous experiments by splitting incoming user traffic randomly into treatment and control groups. Despite perfect randomization between groups, simultaneous experiments can interact with each other and negatively impact average population outcomes such as engagement metrics, which are measured globally and monitored to protect the overall user experience. It is therefore crucial to measure these interaction effects and to attribute their overall impact fairly to the respective experimenters. We suggest an approach to measure and disentangle the effect of simultaneous experiments through a cost-sharing scheme based on Shapley values. We also provide a counterfactual perspective that predicts shared impact based on conditional average treatment effects, making use of causal inference techniques. We illustrate our approach on real-world and synthetic data.
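The cost-sharing idea from the abstract can be sketched with an exact Shapley-value computation over experiments. The coalition effects below (two hypothetical experiments "A" and "B", each with individual effect -1 and an extra interaction penalty of -1 when run together) are illustrative assumptions, not data from the paper; the paper's own estimation of coalition effects is more involved.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values.

    players: list of experiment identifiers.
    value:   maps a frozenset (coalition of active experiments) to the
             total metric effect of running exactly those experiments.
    Returns a dict attributing the grand-coalition effect to each player.
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                s = frozenset(coal)
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p to coalition s
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Hypothetical coalition effects: individual effects of -1 each, plus an
# interaction penalty of -1 when both experiments run simultaneously.
effects = {
    frozenset(): 0.0,
    frozenset({"A"}): -1.0,
    frozenset({"B"}): -1.0,
    frozenset({"A", "B"}): -3.0,
}
phi = shapley_values(["A", "B"], effects.__getitem__)
# By symmetry, each experiment is attributed -1.5: its own effect plus
# half of the interaction penalty. The shares sum to the total effect.
```

The efficiency property of Shapley values guarantees that the attributed shares always sum to the observed grand-coalition effect, which is what makes them a fair cost-sharing scheme here.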
Date: 2022-10
New Economics Papers: this item is included in nep-exp, nep-gth and nep-pay
Published in WWW '22: Companion Proceedings of the Web Conference 2022
Downloads: http://arxiv.org/pdf/2210.08338 (application/pdf, latest version)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2210.08338