A Finite Time Analysis of Temporal Difference Learning with Linear Function Approximation
Jalaj Bhandari,
Daniel Russo and
Raghav Singal
Additional contact information
Jalaj Bhandari: Operations Research, Columbia University, New York, New York 10027
Daniel Russo: Graduate School of Business, Columbia University, New York, New York 10027
Raghav Singal: Operations Research, Columbia University, New York, New York 10027
Operations Research, 2021, vol. 69, issue 3, 950-973
Abstract:
Temporal difference learning (TD) is a simple iterative algorithm used to estimate the value function corresponding to a given policy in a Markov decision process. Although TD is one of the most widely used algorithms in reinforcement learning, its theoretical analysis has proved challenging and few guarantees on its statistical efficiency are available. In this work, we provide a simple and explicit finite time analysis of temporal difference learning with linear function approximation. Except for a few key insights, our analysis mirrors standard techniques for analyzing stochastic gradient descent algorithms and therefore inherits the simplicity and elegance of that literature. Final sections of the paper show how all of our main results extend to the study of TD learning with eligibility traces, known as TD(λ), and to Q-learning applied in high-dimensional optimal stopping problems.
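For readers unfamiliar with the algorithm the abstract analyzes, the sketch below illustrates the standard TD(0) update with a linear value estimate V(s) ≈ θᵀφ(s): after observing a transition (s, r, s'), the TD error δ = r + γ θᵀφ(s') − θᵀφ(s) is computed and θ is adjusted along δ·φ(s). This is a minimal illustrative sketch, not code from the paper; the function names (td0_linear, env_step, phi), the toy random-walk environment, and the step-size and discount values are hypothetical placeholders.

    import numpy as np

    def td0_linear(env_step, phi, d, alpha=0.05, gamma=0.95, num_steps=10_000, seed=0):
        """Run TD(0) with a linear value estimate V(s) ~ theta^T phi(s).

        env_step(s, rng) -> (reward, next_state) samples one transition of the
        Markov chain induced by the fixed policy being evaluated (hypothetical interface).
        phi(s) -> length-d feature vector for state s.
        """
        rng = np.random.default_rng(seed)
        theta = np.zeros(d)
        s = 0  # arbitrary initial state
        for _ in range(num_steps):
            r, s_next = env_step(s, rng)
            # TD error: delta = r + gamma * V(s') - V(s)
            delta = r + gamma * (phi(s_next) @ theta) - (phi(s) @ theta)
            # Semi-gradient TD(0) update: theta <- theta + alpha * delta * phi(s)
            theta += alpha * delta * phi(s)
            s = s_next
        return theta

    if __name__ == "__main__":
        # Toy example (hypothetical): 5-state random walk with one-hot features.
        n = 5

        def phi(s):
            v = np.zeros(n)
            v[s] = 1.0
            return v

        def env_step(s, rng):
            s_next = min(max(s + rng.choice([-1, 1]), 0), n - 1)
            reward = 1.0 if s_next == n - 1 else 0.0
            return reward, s_next

        print(td0_linear(env_step, phi, d=n))

The paper's finite time analysis concerns exactly this kind of iteration; the eligibility-trace variant TD(λ) mentioned in the abstract additionally maintains a decaying trace vector in place of φ(s) in the update.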
Keywords: dynamic programming/optimal control; decision analysis: sequential; machine learning and data science; reinforcement learning; temporal difference learning; finite time analysis; stochastic gradient descent
Date: 2021
Downloads: http://dx.doi.org/10.1287/opre.2020.2024 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:69:y:2021:i:3:p:950-973