Optimal Policy Learning for Multi-Action Treatment with Risk Preference using Stata

Giovanni Cerulli

Papers from arXiv.org

Abstract: This paper presents the community-contributed Stata command "opl_ma_fb" (and the companion command "opl_ma_vf") for implementing the first-best Optimal Policy Learning (OPL) algorithm, which estimates the best treatment assignment given an observed outcome, a multi-action (or multi-arm) treatment, and a set of observed covariates (features). The command allows for different risk preferences in decision-making (risk-neutral, linear risk-averse, and quadratic risk-averse), provides a graphical representation of the optimal policy, and reports an estimate of the maximal welfare (i.e., the value function evaluated at the optimal policy) using regression adjustment (RA), inverse-probability weighting (IPW), and doubly robust (DR) formulas.
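
The sketch below is a conceptual Python illustration of the ideas named in the abstract, not the code of the Stata commands "opl_ma_fb" or "opl_ma_vf": it fits one outcome model per treatment arm, assigns each unit the arm with the highest risk-adjusted predicted outcome (a first-best rule), and evaluates the learned policy's welfare with standard RA, IPW, and DR formulas. The synthetic data, the variance-based risk penalty, and all variable names are illustrative assumptions; the command's exact functional forms may differ.

    # Hedged sketch: first-best multi-action policy assignment and
    # RA / IPW / DR estimates of the policy value (welfare).
    # Illustrative only -- not the opl_ma_fb / opl_ma_vf implementation.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(0)
    n, k, n_arms = 2000, 3, 3

    # Synthetic observational data: covariates X, multi-arm treatment D, outcome Y.
    X = rng.normal(size=(n, k))
    D = rng.integers(0, n_arms, size=n)
    Y = X @ rng.normal(size=k) + 0.5 * D * X[:, 0] + rng.normal(size=n)

    # Outcome models m_d(x): one regression per arm (regression adjustment).
    mu = np.zeros((n, n_arms))
    for d in range(n_arms):
        mu[:, d] = LinearRegression().fit(X[D == d], Y[D == d]).predict(X)

    # Propensity scores e_d(x) for the IPW and DR formulas (multinomial logit).
    ps = LogisticRegression(max_iter=1000).fit(X, D).predict_proba(X)

    # First-best rule: assign each unit the arm with the highest risk-adjusted
    # predicted outcome. The quadratic penalty (lam * residual variance per arm)
    # is one plausible way to encode risk aversion; lam = 0 is risk-neutral.
    lam = 0.1
    arm_var = np.array([np.var(Y[D == d] - mu[D == d, d]) for d in range(n_arms)])
    score = mu - lam * arm_var
    policy = score.argmax(axis=1)

    # Welfare (value-function) estimators at the learned policy.
    match = (D == policy).astype(float)           # observed arm equals policy arm
    m_pol = mu[np.arange(n), policy]              # predicted outcome under policy
    e_pol = ps[np.arange(n), policy]              # propensity of the policy arm
    V_ra = m_pol.mean()
    V_ipw = (match / e_pol * Y).mean()
    V_dr = (m_pol + match / e_pol * (Y - m_pol)).mean()
    print(f"RA={V_ra:.3f}  IPW={V_ipw:.3f}  DR={V_dr:.3f}")

The three welfare estimates should be close on data like this; DR combines the outcome model and the propensity model so that either one being correct is enough for consistency.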

Date: 2025-09

Downloads: http://arxiv.org/pdf/2509.06851 (latest version, application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2509.06851

Handle: RePEc:arx:papers:2509.06851