Inference with difference-in-differences with a small number of groups: a review, simulation study and empirical application using SHARE data
Slawa Rokicki, Jessica Cohen, Günther Fink, Joshua A. Salomon and Mary Beth Landrum
Additional contact information
Jessica Cohen: Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA.
Joshua A. Salomon: Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA.
Mary Beth Landrum: Department of Health Care Policy, Harvard Medical School, Boston, MA.
No 201802, Working Papers from Geary Institute, University College Dublin
Abstract:
Background: Difference-in-differences (DID) estimation has become increasingly popular as an approach to evaluating the effect of a group-level policy on individual-level outcomes. Several statistical methodologies have been proposed to correct for the within-group correlation of model errors that results from the clustering of data. Little is known about how well these corrections perform with the often small number of groups observed in health research using longitudinal data.
Methods: First, we review the most commonly used modelling solutions in DID estimation for panel data, including generalized estimating equations (GEE), permutation tests, clustered standard errors (CSE), wild cluster bootstrapping, and aggregation. Second, we compare the empirical coverage rates and power of these methods in a Monte Carlo simulation study across scenarios that vary the degree of error correlation, the balance of group sizes, and the proportion of treated groups. Third, we provide an empirical example using the Survey of Health, Ageing and Retirement in Europe (SHARE).
Results: When the number of groups is small, CSE are systematically biased downwards in scenarios in which the data are unbalanced or the proportion of treated groups is low. This can lead to over-rejection of the null hypothesis even with as many as 50 groups. Aggregation, permutation tests, bias-adjusted GEE, and the wild cluster bootstrap produce coverage rates close to the nominal rate in almost all scenarios, though GEE may suffer from low power.
Conclusions: In DID estimation with a small number of groups, analysis using aggregation, permutation tests, the wild cluster bootstrap, or bias-adjusted GEE is recommended.
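As a rough illustration of the setting the abstract describes, the sketch below simulates clustered two-period panel data, computes a simple 2x2 DID estimate of cell means, and carries out a group-level permutation test (one of the approaches the paper recommends when the number of groups is small). All specifics here (group counts, effect size, error structure, function names) are illustrative assumptions, not the paper's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel: 10 groups (5 treated), 2 periods, 50 units per group-period,
# true DID effect = 1.0 (all values are illustrative assumptions)
n_groups, n_per = 10, 50
group = np.repeat(np.arange(n_groups), n_per * 2)
post = np.tile([0, 1], n_groups * n_per)
treated_groups = np.arange(n_groups) < 5          # which groups receive treatment
treat = treated_groups[group].astype(int)
group_fe = rng.normal(0, 0.5, n_groups)[group]    # induces within-group error correlation
y = 1.0 * treat * post + group_fe + rng.normal(size=group.size)

def did(y, treat, post):
    """Simple 2x2 difference-in-differences of cell means."""
    return ((y[(treat == 1) & (post == 1)].mean() - y[(treat == 1) & (post == 0)].mean())
          - (y[(treat == 0) & (post == 1)].mean() - y[(treat == 0) & (post == 0)].mean()))

est = did(y, treat, post)

# Group-level permutation test: reshuffle which groups count as "treated",
# recompute the DID statistic, and compare against the observed estimate
n_perm = 999
perm_stats = []
for _ in range(n_perm):
    perm_assignment = rng.permutation(treated_groups)
    perm_stats.append(did(y, perm_assignment[group].astype(int), post))
p_value = (np.sum(np.abs(perm_stats) >= abs(est)) + 1) / (n_perm + 1)
print(f"DID estimate = {est:.2f}, permutation p = {p_value:.3f}")
```

Because treatment varies only at the group level, the permutation test reassigns treatment across whole groups rather than across individual observations, which is what makes its inference valid under within-group correlation.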
Keywords: difference-in-differences; clustered standard errors; inference; Monte Carlo simulation; GEE
Pages: 36 pages
Date: 2018-01-16
Citations: 8 (in EconPapers)
Downloads: http://www.ucd.ie/geary/static/publications/workingpapers/gearywp201802.pdf (First version, 2018, application/pdf)
Related works:
Working Paper: Inference with difference-in-differences with a small number of groups: a review, simulation study and empirical application using SHARE data (2018) 
Persistent link: https://EconPapers.repec.org/RePEc:ucd:wpaper:201802