Forward and backward state abstractions for off-policy evaluation
Meiling Hao, Pingfan Su, Liyuan Hu, Zoltan Szabo, Qianyu Zhao and Chengchun Shi
LSE Research Online Documents on Economics from London School of Economics and Political Science, LSE Library
Abstract:
Off-policy evaluation (OPE) is crucial for assessing a target policy's impact offline, before its deployment. However, achieving accurate OPE in large state spaces remains challenging. This paper studies state abstractions, originally designed for policy learning, in the context of OPE. Our contributions are threefold: (i) we define a set of irrelevance conditions central to learning state abstractions for OPE; (ii) we derive sufficient conditions for achieving irrelevance in Q-functions and marginalized importance sampling ratios, the latter obtained by constructing a time-reversed Markov decision process (MDP) from the observed MDP; (iii) we propose a novel two-step procedure that sequentially projects the original state space into a smaller one, substantially reducing the sample complexity of OPE arising from high cardinality.
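For context, the two objects in contribution (ii) can be sketched in standard OPE notation; this is an illustrative reconstruction from the literature, not necessarily the paper's exact formulation. Writing $d^{\pi}$ and $d^{b}$ for the discounted state-action visitation distributions under the target and behavior policies, the Q-function and the marginalized importance sampling ratio are
\[
Q^{\pi}(s,a) \;=\; \mathbb{E}^{\pi}\Big[\textstyle\sum_{t \ge 0} \gamma^{t} R_{t} \,\Big|\, S_{0}=s,\, A_{0}=a\Big],
\qquad
\omega^{\pi}(s,a) \;=\; \frac{d^{\pi}(s,a)}{d^{b}(s,a)},
\]
and one standard construction of a time-reversed MDP applies Bayes' rule to the behavior policy's stationary distribution,
\[
\overline{P}(s, a \mid s') \;=\; \frac{P(s' \mid s, a)\, d^{b}(s, a)}{d^{b}(s')},
\]
so that irrelevance conditions for $\omega^{\pi}$ can be studied as forward-looking properties of this backward process.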
JEL-codes: C1
Pages: 42 pages
Date: 2024-06-27
Downloads: http://eprints.lse.ac.uk/124074/ (open access version, application/pdf)
Export reference: BibTeX, RIS (EndNote, ProCite, RefMan), HTML/Text
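As an illustration, a BibTeX entry assembled from the metadata above might look as follows; the citation key and the techreport entry type are assumptions, not an official export.

@techreport{hao2024forward,
  title       = {Forward and backward state abstractions for off-policy evaluation},
  author      = {Hao, Meiling and Su, Pingfan and Hu, Liyuan and Szabo, Zoltan and Zhao, Qianyu and Shi, Chengchun},
  institution = {London School of Economics and Political Science, LSE Library},
  type        = {LSE Research Online Documents on Economics},
  number      = {124074},
  year        = {2024},
  url         = {http://eprints.lse.ac.uk/124074/}
}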
Persistent link: https://EconPapers.repec.org/RePEc:ehl:lserod:124074