Policy Learning for Optimal Dynamic Treatment Regimes with Observational Data

Shosei Sakaguchi

Papers from arXiv.org

Abstract: Public policies and medical interventions often involve dynamic treatment assignments, in which individuals receive a sequence of interventions over multiple stages. We study the statistical learning of optimal dynamic treatment regimes (DTRs) that determine the optimal treatment assignment for each individual at each stage based on their evolving history. We propose a novel, doubly robust, classification-based method for learning the optimal DTR from observational data under the sequential ignorability assumption. The method proceeds via backward induction: at each stage, it constructs and maximizes an augmented inverse probability weighting (AIPW) estimator of the policy value function to learn the optimal stage-specific policy. We show that the resulting DTR achieves an optimal convergence rate of $n^{-1/2}$ for welfare regret under mild convergence conditions on estimators of the nuisance components.
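The backward-induction procedure sketched in the abstract can be illustrated in code. The following is a minimal, hedged sketch — not the paper's implementation: it uses a simulated two-stage observational dataset with *known* propensity scores, simple linear outcome regressions as the nuisance estimates, and a threshold policy class searched over a grid. All variable names, the data-generating process, and the policy class are assumptions made for this example; the paper's formal guarantees concern estimated nuisances and richer policy classes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# ---- Simulated observational data: two stages, binary treatments (assumed DGP) ----
X1 = rng.normal(size=n)                     # stage-1 covariate
e1 = 1 / (1 + np.exp(-X1))                  # stage-1 propensity (known here for simplicity)
A1 = rng.binomial(1, e1)
X2 = X1 + A1 + rng.normal(size=n)           # stage-2 covariate (depends on history)
e2 = 1 / (1 + np.exp(-0.5 * X2))            # stage-2 propensity
A2 = rng.binomial(1, e2)
# Final outcome: each treatment helps iff the current covariate is positive
Y = A1 * np.sign(X1) + A2 * np.sign(X2) + rng.normal(size=n)

def fit_mu(x, a, y):
    """Outcome regression: linear model of y on (1, x), fit separately per arm."""
    def fit_arm(mask):
        Z = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(Z, y[mask], rcond=None)
        return lambda xx, b=beta: b[0] + b[1] * xx
    return {0: fit_arm(a == 0), 1: fit_arm(a == 1)}

def aipw_scores(x, a, y, e, mu):
    """AIPW score Gamma_i(d) for each candidate treatment d in {0, 1}."""
    scores = np.empty((len(x), 2))
    for d in (0, 1):
        p = e if d == 1 else 1 - e
        scores[:, d] = mu[d](x) + (a == d) / p * (y - mu[d](x))
    return scores

def learn_threshold_policy(x, scores):
    """Maximize the estimated policy value over threshold rules 1{x > c}."""
    best_c, best_v = None, -np.inf
    for c in np.quantile(x, np.linspace(0.02, 0.98, 97)):
        d = (x > c).astype(int)
        v = scores[np.arange(len(x)), d].mean()
        if v > best_v:
            best_c, best_v = c, v
    return best_c, best_v

# ---- Backward induction ----
# Stage 2: learn pi_2 by maximizing the AIPW estimate of the final-outcome value.
mu2 = fit_mu(X2, A2, Y)
g2 = aipw_scores(X2, A2, Y, e2, mu2)
c2, v2 = learn_threshold_policy(X2, g2)

# Stage 1: the pseudo-outcome is the AIPW score of the *learned* stage-2 policy;
# the same construction is then repeated one stage earlier.
d2 = (X2 > c2).astype(int)
pseudo = g2[np.arange(n), d2]
mu1 = fit_mu(X1, A1, pseudo)
g1 = aipw_scores(X1, A1, pseudo, e1, mu1)
c1, v1 = learn_threshold_policy(X1, g1)

print(f"learned thresholds: stage 2 ~ {c2:.2f}, stage 1 ~ {c1:.2f}")
```

Under this data-generating process the oracle rule at each stage is to treat when the current covariate exceeds zero, so both learned thresholds should land near 0; the doubly robust structure means the value estimate stays consistent if either the outcome regressions or the propensities are correctly specified.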

Date: 2024-03, Revised 2025-05
New Economics Papers: this item is included in nep-ecm

Downloads: (external link)
http://arxiv.org/pdf/2404.00221 Latest version (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2404.00221



 
Page updated 2025-05-21
Handle: RePEc:arx:papers:2404.00221