Revolutionizing Estimation and Inference for Program Evaluation Using Bayesian Methods
Lauren Vollmer,
Mariel Finucane and
Randall Brown
Evaluation Review, 2020, vol. 44, issue 4, 295-324
Abstract:
Background: Policy makers seek to replace the “thumbs up–thumbs down” of conventional hypothesis testing with statements about the probability that program effects on key outcomes exceed policy-relevant thresholds. Objective: We develop a Bayesian model that addresses the shortcomings of a typical frequentist approach to estimating the effects of the Comprehensive Primary Care (CPC) initiative, a Centers for Medicare and Medicaid Services demonstration. We compare findings from the two approaches to illustrate the relative merits of introducing additional assumptions through Bayesian methods. Research design: We apply Bayesian and frequentist methods to estimate the effects of CPC on total Medicare expenditures per beneficiary per month for Medicare beneficiaries attributed to participating practices. Under both paradigms, we estimated program effects using difference-in-differences regressions comparing the change in Medicare expenditures between baseline and follow-up for Medicare patients attributed to 497 primary care practices participating in CPC to Medicare patients attributed to 908 propensity score-matched comparison practices. Results: Results from the Bayesian and frequentist models are comparable for the overall sample, but in regional subsamples, the Bayesian model produces more precise estimates that exhibit less variation over time. The Bayesian results also permit probabilistic inference about the magnitudes of effects, offering policy makers the ability to draw conclusions about practically meaningful thresholds. Conclusions: Carefully developed Bayesian models can enhance precision and plausibility and offer a more nuanced understanding of where and when program effects occur, without imposing undue assumptions. At the same time, these methods frame conclusions in flexible, intuitive terms that respond directly to policy makers’ needs.
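The probabilistic inference the abstract describes — stating the chance that an effect exceeds a policy-relevant threshold, rather than reporting a binary significance verdict — can be sketched with posterior draws. The sketch below uses purely illustrative numbers (a normal posterior for a difference-in-differences effect on Medicare expenditures, in dollars per beneficiary per month) and does not reproduce the CPC evaluation's actual model or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for a difference-in-differences treatment
# effect on expenditures ($ per beneficiary per month). A negative effect
# means the program reduced spending. All numbers are illustrative.
posterior_effect = rng.normal(loc=-9.0, scale=5.0, size=10_000)

# Probabilistic inference: the share of posterior draws below a threshold
# estimates the probability that the effect exceeds that threshold.
p_any_savings = np.mean(posterior_effect < 0.0)      # any reduction at all
p_savings_over_5 = np.mean(posterior_effect < -5.0)  # reduction beyond $5

print(f"P(effect < $0):  {p_any_savings:.2f}")
print(f"P(effect < -$5): {p_savings_over_5:.2f}")
```

Unlike a p-value, these quantities answer the policy maker's question directly: with what probability do savings exceed a practically meaningful amount?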
Keywords: economic evaluation; design and evaluation of programs and policies; quasi-experimental design; methodology; Bayesian; primary care
Date: 2020
Downloads:
https://journals.sagepub.com/doi/10.1177/0193841X18815817 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:evarev:v:44:y:2020:i:4:p:295-324
DOI: 10.1177/0193841X18815817
Bibliographic data for this series is maintained by SAGE Publications.