Should We Adjust for the Test for Pre-trends in Difference-in-Difference Designs?

Jonathan Roth

Papers from

Abstract: The common practice in difference-in-difference (DiD) designs is to check for parallel trends prior to treatment assignment, yet typical estimation and inference do not account for the fact that this test has occurred. I analyze the properties of the traditional DiD estimator conditional on having passed (i.e., not rejected) the test for parallel pre-trends. When the DiD design is valid and the test for pre-trends confirms it, the typical DiD estimator is unbiased, but traditional standard errors are overly conservative. Additionally, there exists an alternative unbiased estimator that is more efficient than the traditional DiD estimator under parallel trends. However, when there is a non-zero pre-trend in the population but we fail to reject the hypothesis of parallel pre-trends, the DiD estimator is generally biased relative to the population DiD coefficient. Moreover, if the trend is monotone, then under reasonable assumptions conditioning on having passed the pre-test exacerbates the bias relative to the true treatment effect. I propose new estimation and inference procedures that account for the test for parallel trends, and compare their performance to that of the traditional estimator in a Monte Carlo simulation.
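The conditioning effect described in the abstract can be illustrated with a short Monte Carlo sketch. This is not the paper's own simulation design; the three-period setup, the noise level `se`, and the trend `delta` are illustrative assumptions. With a monotone differential trend, draws that happen to pass the pre-trends test are exactly those whose pre-period noise masks the trend, and because that noise enters the DiD estimate with the opposite sign, the DiD estimator conditional on passing is biased even beyond the unconditional bias `delta`:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 200_000
se = 1.0      # std. error of each group-period sample mean (assumed)
delta = 1.0   # differential trend per period for the treated group (assumed)
tau = 0.0     # true treatment effect is zero in this sketch

# Group-period sample means over 3 periods (pre, pre, post):
# control is flat, treated drifts by delta each period.
C = rng.normal(0.0, se, size=(n_sim, 3))
T = rng.normal(delta * np.arange(3), se, size=(n_sim, 3))

# Estimated pre-trend; its std. error is 2*se (difference of 4 iid means).
pretrend = (T[:, 1] - T[:, 0]) - (C[:, 1] - C[:, 0])
passed = np.abs(pretrend) < 1.96 * (2.0 * se)  # fail to reject parallel trends

# Standard DiD estimate; its unconditional expectation is tau + delta.
did = (T[:, 2] - T[:, 1]) - (C[:, 2] - C[:, 1])

print(f"pass rate:            {passed.mean():.3f}")
print(f"unconditional E[DiD]: {did.mean():.3f}  (bias = delta = {delta:.1f})")
print(f"conditional E[DiD]:   {did[passed].mean():.3f}  (bias exceeds delta)")
```

The mechanism is visible in the algebra: the period-1 means enter the pre-trend estimate and the DiD estimate with opposite signs, so selecting draws with a small estimated pre-trend pushes the conditional mean of the DiD estimate further from the true effect.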

New Economics Papers: this item is included in nep-ecm
Date: 2018-04, Revised 2018-05



Page updated 2018-06-15
Handle: RePEc:arx:papers:1804.01208