Multiple hypothesis testing in experimental economics

John List, Azeem Shaikh and Yang Xu
Additional contact information
Yang Xu: University of Chicago

Experimental Economics, 2019, vol. 22, issue 4, No 1, 773-793

Abstract: The analysis of data from experiments in economics routinely involves testing multiple null hypotheses simultaneously. These different null hypotheses arise naturally in this setting for at least three different reasons: when there are multiple outcomes of interest and it is desired to determine on which of these outcomes a treatment has an effect; when the effect of a treatment may be heterogeneous in that it varies across subgroups defined by observed characteristics and it is desired to determine for which of these subgroups a treatment has an effect; and finally when there are multiple treatments of interest and it is desired to determine which treatments have an effect relative to either the control or relative to each of the other treatments. In this paper, we provide a bootstrap-based procedure for testing these null hypotheses simultaneously using experimental data in which simple random sampling is used to assign treatment status to units. Using the general results in Romano and Wolf (Ann Stat 38:598–633, 2010), we show under weak assumptions that our procedure (1) asymptotically controls the familywise error rate—the probability of one or more false rejections—and (2) is asymptotically balanced in that the marginal probability of rejecting any true null hypothesis is approximately equal in large samples. Importantly, by incorporating information about dependence ignored in classical multiple testing procedures, such as the Bonferroni and Holm corrections, our procedure has much greater ability to detect truly false null hypotheses. In the presence of multiple treatments, we additionally show how to exploit logical restrictions across null hypotheses to further improve power. We illustrate our methodology by revisiting the study by Karlan and List (Am Econ Rev 97(5):1774–1793, 2007) of why people give to charitable causes.
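
For readers who want a concrete picture of the kind of procedure the abstract describes, the sketch below is a minimal Romano–Wolf style stepdown max-t bootstrap for comparing several treatment arms to a control. It is not the authors' implementation: the difference-in-means statistic, the within-arm resampling, and all function names are illustrative assumptions, and the paper's procedure additionally covers multiple outcomes and subgroups, achieves balance across hypotheses, and exploits logical restrictions across treatments.

```python
# Minimal sketch (not the authors' code) of a Romano-Wolf style stepdown
# max-t bootstrap for testing "arm k has no effect vs control" for several arms.
import numpy as np

def t_stats(y, d):
    """Studentized difference in means of each treatment arm against control (d == 0)."""
    arms = np.unique(d)
    arms = arms[arms != 0]
    y0 = y[d == 0]
    stats = []
    for k in arms:
        yk = y[d == k]
        se = np.sqrt(yk.var(ddof=1) / len(yk) + y0.var(ddof=1) / len(y0))
        stats.append((yk.mean() - y0.mean()) / se)
    return np.array(stats), arms

def romano_wolf_stepdown(y, d, n_boot=2000, alpha=0.05, seed=0):
    """Reject null hypotheses while asymptotically controlling the familywise
    error rate (the probability of one or more false rejections)."""
    rng = np.random.default_rng(seed)
    y, d = np.asarray(y, float), np.asarray(d)
    t_obs, arms = t_stats(y, d)
    groups = [np.flatnonzero(d == k) for k in np.unique(d)]

    # Bootstrap: resample units with replacement within each arm, then recentre
    # the statistics at their observed values to mimic the joint null distribution.
    t_boot = np.empty((n_boot, len(arms)))
    for b in range(n_boot):
        idx = np.concatenate([rng.choice(g, size=g.size, replace=True) for g in groups])
        tb, _ = t_stats(y[idx], d[idx])
        t_boot[b] = tb - t_obs

    active = np.arange(len(arms))   # indices of hypotheses not yet rejected
    rejected = []
    while active.size:
        # Critical value: (1 - alpha) quantile of max |t*| over the active hypotheses.
        crit = np.quantile(np.abs(t_boot[:, active]).max(axis=1), 1 - alpha)
        newly = active[np.abs(t_obs[active]) > crit]
        if newly.size == 0:
            break
        rejected.extend(arms[newly].tolist())
        active = np.setdiff1d(active, newly)   # step down: drop rejections, recompute crit
    return rejected

# Example use on hypothetical data: d = 0 for control, 1..3 for treatment arms.
rng = np.random.default_rng(1)
d = rng.integers(0, 4, size=800)
y = 0.3 * (d == 1) + rng.normal(size=800)   # only arm 1 has a true effect
print(romano_wolf_stepdown(y, d))           # typically rejects only arm 1
```

Because the bootstrap approximates the joint distribution of the test statistics, the max-t critical value reflects their dependence, which is the source of the power gain over Bonferroni or Holm corrections noted in the abstract.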

Keywords: Experiments; Multiple hypothesis testing; Multiple treatments; Multiple outcomes; Multiple subgroups; Randomized controlled trial; Bootstrap; Balance
JEL-codes: C12 C14
Date: 2019
References: Available in EconPapers; complete reference list from CitEc
Citations: 268 (tracked in EconPapers)

Downloads (external link): http://link.springer.com/10.1007/s10683-018-09597-5 (abstract, text/html)
Access to the full text of the articles in this series is restricted.

Related works:
Working Paper: Multiple Hypothesis Testing in Experimental Economics (2016)
Working Paper: Multiple Hypothesis Testing in Experimental Economics (2016)


Persistent link: https://EconPapers.repec.org/RePEc:kap:expeco:v:22:y:2019:i:4:d:10.1007_s10683-018-09597-5

Ordering information: This journal article can be ordered from
http://www.springer. ... ry/journal/10683/PS2

DOI: 10.1007/s10683-018-09597-5


Experimental Economics is currently edited by David J. Cooper, Lata Gangadharan and Charles N. Noussair

More articles in Experimental Economics from Springer and the Economic Science Association (contact information at EDIRC).
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:kap:expeco:v:22:y:2019:i:4:d:10.1007_s10683-018-09597-5