Double/debiased machine learning for difference-in-differences models
Neng-Chieh Chang
The Econometrics Journal, 2020, vol. 23, issue 2, 177-191
Abstract:
This paper provides an orthogonal extension of the semiparametric difference-in-differences estimator proposed in earlier literature. The proposed estimator enjoys the so-called Neyman orthogonality (Chernozhukov et al., 2018), and thus it allows researchers to flexibly use a rich set of machine learning methods in the first-step estimation. It is particularly useful when researchers confront a high-dimensional data set in which the number of potential control variables is larger than the sample size, so that conventional nonparametric estimation methods, such as kernel and sieve estimators, do not apply. I apply this orthogonal difference-in-differences estimator to evaluate the effect of tariff reduction on corruption. The empirical results show that tariff reduction decreases corruption by a large magnitude.
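The estimator described in the abstract combines a propensity-score model and an outcome-trend regression in a Neyman-orthogonal score, with cross-fitting so that machine learning methods can be used for both nuisance functions. Below is a minimal illustrative sketch of such a cross-fitted, doubly robust difference-in-differences estimator of the ATT with panel data; the choice of learners (lasso and logistic lasso), the function name, and the simulated data are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a cross-fitted, Neyman-orthogonal (doubly robust)
# difference-in-differences estimator of the ATT with panel data, in the
# spirit of the estimator described in the abstract. Learners, names, and
# the simulated data are assumptions for illustration, not from the paper.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV
from sklearn.model_selection import KFold


def dml_did_att(y_pre, y_post, d, X, n_folds=5, seed=0):
    """Cross-fitted ATT estimate from panel data.

    y_pre, y_post : outcomes before/after treatment, shape (n,)
    d             : treatment indicator, shape (n,)
    X             : (high-dimensional) controls, shape (n, p)
    """
    dy = y_post - y_pre                       # first-differenced outcome
    n = len(dy)
    p_hat = d.mean()                          # P(D = 1)
    scores = np.zeros(n)

    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        # Nuisance 1: propensity score g(x) = P(D = 1 | X = x), logistic lasso
        g_model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10)
        g_model.fit(X[train], d[train])
        g = np.clip(g_model.predict_proba(X[test])[:, 1], 0.01, 0.99)

        # Nuisance 2: control-group trend m(x) = E[dY | D = 0, X = x], lasso
        ctrl = train[d[train] == 0]
        m_model = LassoCV(cv=5).fit(X[ctrl], dy[ctrl])
        m = m_model.predict(X[test])

        # Neyman-orthogonal score for the ATT
        scores[test] = (d[test] - g) / (p_hat * (1 - g)) * (dy[test] - m)

    att = scores.mean()
    se = scores.std(ddof=1) / np.sqrt(n)      # rough plug-in standard error
    return att, se


if __name__ == "__main__":
    # Simulated example with conditional parallel trends and true ATT = 1
    rng = np.random.default_rng(0)
    n, p = 2000, 50
    X = rng.normal(size=(n, p))
    d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
    y_pre = X[:, 1] + rng.normal(size=n)
    y_post = y_pre + 0.5 * X[:, 0] + 1.0 * d + rng.normal(size=n)
    print(dml_did_att(y_pre, y_post, d, X))
```

Cross-fitting keeps the nuisance estimates out of the fold on which the score is evaluated, which, together with the orthogonal score, is what permits flexible machine learning first steps.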
Keywords: Difference-in-differences; high-dimensional data; causal inference; machine learning
Date: 2020
Citations: 24 (as tracked in EconPapers)
Downloads: http://hdl.handle.net/10.1093/ectj/utaa001 (application/pdf)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:oup:emjrnl:v:23:y:2020:i:2:p:177-191.
The Econometrics Journal is currently edited by Jaap Abbring
More articles in The Econometrics Journal from the Royal Economic Society. Contact information at EDIRC.
Bibliographic data for series maintained by Oxford University Press.