Risk and Optimal Policies in Bandit Experiments

Karun Adusumilli

Econometrica, 2025, vol. 93, issue 3, 1003-1029

Abstract: We provide a decision‐theoretic analysis of bandit experiments under local asymptotics. Working within the framework of diffusion processes, we define suitable notions of asymptotic Bayes and minimax risk for these experiments. For normally distributed rewards, the minimal Bayes risk can be characterized as the solution to a second‐order partial differential equation (PDE). Using a limit of experiments approach, we show that this PDE characterization also holds asymptotically under both parametric and non‐parametric distributions of the rewards. The approach further identifies the state variables to which it is asymptotically sufficient to restrict attention, and thereby suggests a practical strategy for dimension reduction. The PDEs characterizing minimal Bayes risk can be solved efficiently using sparse matrix routines or Monte Carlo methods. We derive the optimal Bayes and minimax policies from their numerical solutions. These optimal policies substantially dominate existing methods such as Thompson sampling; the risk of the latter is often twice as high.
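
As context for the comparison above, the following is a minimal Monte Carlo sketch of the Bayes risk (expected regret) of Thompson sampling, the baseline policy named in the abstract, in a two-armed bandit with normally distributed rewards. It illustrates the baseline only, not the paper's PDE-based optimal policies; the normal prior, unit noise variance, horizon, and simulation counts are assumptions chosen purely for illustration.

import numpy as np

def thompson_bayes_risk(n_periods=100, n_sims=2000, seed=0):
    """Monte Carlo estimate of the Bayes regret of Thompson sampling
    for a two-armed bandit with N(mu_a, 1) rewards and N(0, 1) priors.
    (Illustrative assumptions only; not the paper's setup.)"""
    rng = np.random.default_rng(seed)
    regrets = np.empty(n_sims)
    for s in range(n_sims):
        mu = rng.normal(0.0, 1.0, size=2)   # true arm means, drawn from the prior
        post_mean = np.zeros(2)             # posterior means, start at the prior
        post_prec = np.ones(2)              # posterior precisions (prior variance 1)
        total_reward = 0.0
        for _ in range(n_periods):
            # Thompson step: sample one mean per arm from its posterior,
            # then pull the arm with the larger draw.
            draws = rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
            a = int(np.argmax(draws))
            r = rng.normal(mu[a], 1.0)      # observe a noisy reward
            total_reward += r
            # Conjugate normal update with known noise variance 1.
            post_prec[a] += 1.0
            post_mean[a] += (r - post_mean[a]) / post_prec[a]
        regrets[s] = n_periods * mu.max() - total_reward
    # Average over prior draws and reward noise: the Bayes risk of the policy.
    return regrets.mean()

if __name__ == "__main__":
    print(f"Estimated Bayes regret of Thompson sampling: {thompson_bayes_risk():.2f}")

Against this baseline, the abstract reports that the paper's PDE-derived optimal policies often achieve roughly half the risk.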

Date: 2025

Downloads: https://doi.org/10.3982/ECTA21075 (external link)

Persistent link: https://EconPapers.repec.org/RePEc:wly:emetrp:v:93:y:2025:i:3:p:1003-1029

Ordering information: This journal article can be ordered from
https://www.economet ... ordering-back-issues

Econometrica is currently edited by Guido W. Imbens

More articles in Econometrica from the Econometric Society. Contact information at EDIRC.
Bibliographic data for series maintained by Wiley Content Delivery.

 
Handle: RePEc:wly:emetrp:v:93:y:2025:i:3:p:1003-1029