EconPapers    

What Is the Value Added by Using Causal Machine Learning Methods in a Welfare Experiment Evaluation?

Anthony Strittmatter

Papers from arXiv.org

Abstract: Recent studies have proposed causal machine learning (CML) methods to estimate conditional average treatment effects (CATEs). In this study, I investigate whether CML methods add value compared to conventional CATE estimators by re-evaluating Connecticut's Jobs First welfare experiment. This experiment entails a mix of positive and negative work incentives. Previous studies show that it is hard to tackle the effect heterogeneity of Jobs First by means of CATEs. I report evidence that CML methods can provide support for the theoretical labor supply predictions. Furthermore, I document reasons why some conventional CATE estimators fail and discuss the limitations of CML methods.
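To make the CATE objects the abstract refers to concrete, the sketch below shows one conventional way to estimate conditional average treatment effects: a T-learner that fits separate outcome models for treated and control units and takes the difference in their predictions. Everything here is a hypothetical illustration on synthetic data using scikit-learn random forests; it is not the paper's estimator, data, or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))       # covariates (stand-ins for, e.g., earnings history)
T = rng.integers(0, 2, size=n)    # randomized treatment, as in a welfare experiment
tau = 1.0 + X[:, 0]               # true CATE varies with the first covariate
y = X.sum(axis=1) + tau * T + rng.normal(size=n)  # observed outcome

# T-learner: fit one outcome model per treatment arm.
m1 = RandomForestRegressor(random_state=0).fit(X[T == 1], y[T == 1])
m0 = RandomForestRegressor(random_state=0).fit(X[T == 0], y[T == 0])

# The estimated CATE is the difference in predicted outcomes.
cate_hat = m1.predict(X) - m0.predict(X)
print(cate_hat.mean())  # average of the estimated CATEs; the true ATE here is 1.0
```

Causal forests and other CML methods studied in the paper refine this basic idea with honest sample splitting and orthogonalization; the T-learner above is only the simplest baseline.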

Date: 2018-12, Revised 2019-03
New Economics Papers: this item is included in nep-big, nep-cmp and nep-exp
Citations: View citations in EconPapers (8)

Downloads: (external link)
http://arxiv.org/pdf/1812.06533 Latest version (application/pdf)

Related works:
Working Paper: What Is the Value Added by Using Causal Machine Learning Methods in a Welfare Experiment Evaluation? (2019)
Working Paper: What is the Value Added by using Causal Machine Learning Methods in a Welfare Experiment Evaluation? (2019)


Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:1812.06533


More papers in Papers from arXiv.org
Bibliographic data for this series is maintained by arXiv administrators.

 
Page updated 2020-08-08
Handle: RePEc:arx:papers:1812.06533