Corruption-Robust Exploration in Episodic Reinforcement Learning
Thodoris Lykouris,
Max Simchowitz,
Aleksandrs Slivkins and
Wen Sun
Additional contact information
Thodoris Lykouris: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Max Simchowitz: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Aleksandrs Slivkins: Microsoft Research Lab, New York, New York 10012
Wen Sun: Department of Computer Science, Cornell University, Ithaca, New York 14850
Mathematics of Operations Research, 2025, vol. 50, issue 2, 1277-1304
Abstract:
We initiate the study of episodic reinforcement learning (RL) under adversarial corruptions in both the rewards and the transition probabilities of the underlying system, extending recent results for the special case of multiarmed bandits. We provide a framework that tempers the aggressive exploration of existing RL approaches based on optimism in the face of uncertainty, complementing them with principles from action elimination. Importantly, our framework circumvents the major challenges posed by naively applying action elimination in the RL setting, which we formalize via a lower bound. Our framework yields efficient algorithms that (a) attain near-optimal regret in the absence of corruptions and (b) adapt to unknown levels of corruption, enjoying regret guarantees that degrade gracefully with the total corruption encountered. To showcase the generality of our approach, we derive results for both tabular settings (where states and actions are finite) and linear Markov decision process settings (where the dynamics and rewards admit a linear underlying representation). Notably, our work provides the first sublinear regret guarantee that accommodates any deviation from purely independent and identically distributed transitions in the bandit-feedback model for episodic reinforcement learning.
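To make the high-level idea concrete, the following is a minimal, illustrative sketch, not the paper's algorithm: a UCBVI-style tabular learner whose optimistic confidence bonuses are widened by an assumed, known corruption budget C_BUDGET, so that a bounded amount of adversarial corruption cannot invalidate the optimistic upper bounds. The toy MDP, all names (OptimisticLearner, run_episode), and the specific bonus are hypothetical choices for exposition; the paper's framework additionally draws on action-elimination principles and adapts to unknown corruption levels, neither of which this sketch attempts.

import numpy as np

S, A, H = 3, 2, 4          # states, actions, horizon (toy sizes)
C_BUDGET = 5.0             # assumed upper bound on total corruption (hypothetical)
rng = np.random.default_rng(0)

# Toy ground-truth MDP; an adversary may perturb what the learner observes.
P_true = rng.dirichlet(np.ones(S), size=(S, A))   # P_true[s, a] is a distribution over next states
R_true = rng.uniform(0, 1, size=(S, A))

class OptimisticLearner:
    def __init__(self):
        self.counts = np.zeros((S, A))
        self.r_sum = np.zeros((S, A))
        self.p_counts = np.zeros((S, A, S))

    def plan(self):
        """Optimistic value iteration; the C_BUDGET / n term widens the usual
        sqrt(1/n) bonus so that up to C_BUDGET of corruption across all
        episodes cannot push the empirical estimates past the bonus."""
        Q = np.zeros((H + 1, S, A))
        V = np.zeros((H + 1, S))
        n = np.maximum(self.counts, 1.0)
        r_hat = self.r_sum / n
        p_hat = self.p_counts / n[:, :, None]
        for h in range(H - 1, -1, -1):
            bonus = np.sqrt(2.0 * np.log(S * A * H) / n) + C_BUDGET / n
            Q[h] = np.clip(r_hat + p_hat @ V[h + 1] + H * bonus, 0, H)
            V[h] = Q[h].max(axis=1)
        return Q

    def update(self, s, a, r, s_next):
        self.counts[s, a] += 1
        self.r_sum[s, a] += r
        self.p_counts[s, a, s_next] += 1

def run_episode(learner, corrupt=False):
    Q = learner.plan()
    s = 0
    for h in range(H):
        a = int(Q[h, s].argmax())
        # In corrupted episodes the observed reward is perturbed; transitions
        # could be corrupted analogously.
        r = R_true[s, a] + (0.5 if corrupt else 0.0)
        s_next = int(rng.choice(S, p=P_true[s, a]))
        learner.update(s, a, r, s_next)
        s = s_next

learner = OptimisticLearner()
for ep in range(200):
    run_episode(learner, corrupt=(ep % 50 == 0))  # occasional corrupted episodes

Note that this sketch assumes the corruption budget is known in advance; handling an unknown budget, as the abstract describes, is precisely what requires the more delicate combination of optimism and action elimination developed in the paper.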
Keywords: Primary: 68Q32; reinforcement learning; bandit feedback; exploration; robustness; regret
Date: 2025
Downloads: http://dx.doi.org/10.1287/moor.2021.0202 (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormoor:v:50:y:2025:i:2:p:1277-1304
More articles in Mathematics of Operations Research from INFORMS (contact information at EDIRC).
Bibliographic data for series maintained by Chris Asher.