Improving precision of A/B experiments using trigger intensity
Tanmoy Das, Dohyeon Lee and Arnab Sinha
Papers from arXiv.org
Abstract:
In industry, online randomized controlled experiments (a.k.a. A/B experiments) are a standard approach to measuring the impact of a causal change. The changes tested are deliberately kept small to limit the potential blast radius, so the resulting treatment effects are small and the experiments often lack statistical significance due to a low signal-to-noise ratio. A standard approach to improving precision (i.e., reducing the standard error) focuses only on the trigger observations, those observations for which the treatment and control models produce different outputs. Although evaluation with full information about trigger observations (full knowledge) improves precision, detecting all such trigger observations is costly. In this paper, we propose a sampling-based evaluation method (partial knowledge) to reduce this cost. The randomness of sampling introduces bias into the estimated outcome. We analyze this bias theoretically and show that it is inversely proportional to the number of observations used for sampling. We also compare the proposed evaluation methods using simulated and empirical data. In simulation, the bias of evaluation with partial knowledge effectively reduces to zero when a limited number of observations (
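The setup described in the abstract can be illustrated with a small simulation. The sketch below is not from the paper; it assumes a synthetic population in which only a fraction of units are trigger observations (units whose outcome the treatment would actually change), and it contrasts a full-knowledge estimator, which uses every trigger observation, with a partial-knowledge estimator that detects triggers only within a random sample. All names, rates, and sample sizes are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def simulate_population(n, trigger_rate=0.1, effect=0.5, noise=1.0):
    # Synthetic A/B population: only "triggered" units respond to treatment.
    triggered = rng.random(n) < trigger_rate
    assignment = rng.integers(0, 2, n)           # 0 = control, 1 = treatment
    outcome = rng.normal(0.0, noise, n)
    outcome += effect * assignment * triggered   # effect applies to triggered units only
    return triggered, assignment, outcome

def full_knowledge_estimate(triggered, assignment, outcome):
    # Difference in means restricted to all trigger observations;
    # assumes every trigger has been detected, which is the costly part.
    t = outcome[triggered & (assignment == 1)].mean()
    c = outcome[triggered & (assignment == 0)].mean()
    return t - c

def partial_knowledge_estimate(triggered, assignment, outcome, n_sample):
    # Sampling-based variant: detect triggers only on a random sample.
    # The sampling randomness introduces a bias that shrinks as n_sample grows.
    idx = rng.choice(len(outcome), size=n_sample, replace=False)
    tr, a, y = triggered[idx], assignment[idx], outcome[idx]
    t = y[tr & (a == 1)].mean()
    c = y[tr & (a == 0)].mean()
    return t - c

triggered, assignment, outcome = simulate_population(200_000)
print("full knowledge   :", full_knowledge_estimate(triggered, assignment, outcome))
print("partial knowledge:", partial_knowledge_estimate(triggered, assignment, outcome, 10_000))

Under these assumptions, repeating the partial-knowledge estimate over many random samples and averaging shows its gap from the full-knowledge estimate shrinking as the sample size grows, which is the qualitative behavior the abstract describes.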
Date: 2024-11, Revised 2025-05
New Economics Papers: this item is included in nep-ecm and nep-exp
Downloads: http://arxiv.org/pdf/2411.03530 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2411.03530