The Role of Lookahead and Approximate Policy Evaluation in Reinforcement Learning with Linear Value Function Approximation
Anna Winnicki,
Joseph Lubars,
Michael Livesay and
R. Srikant
Additional contact information
Anna Winnicki: Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois 61801; and Coordinated Science Laboratory, University of Illinois Urbana-Champaign, Urbana, Illinois 61801
Joseph Lubars: Sandia National Laboratories, Albuquerque, New Mexico 87123
Michael Livesay: Sandia National Laboratories, Albuquerque, New Mexico 87123
R. Srikant: Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois 61801; and Coordinated Science Laboratory, University of Illinois Urbana-Champaign, Urbana, Illinois 61801; and c3.ai Digital Transformation Institute, University of Illinois Urbana-Champaign, Urbana, Illinois 61801
Operations Research, 2025, vol. 73, issue 1, 139-156
Abstract:
Function approximation is widely used in reinforcement learning to handle the computational difficulties associated with very large state spaces. However, function approximation introduces errors that may lead to instabilities when approximate dynamic programming (DP) techniques are used to obtain the optimal policy. Therefore, techniques such as lookahead for policy improvement and m-step rollout for policy evaluation are used in practice to improve the performance of approximate DP with function approximation. We quantitatively characterize the impact of lookahead and m-step rollout on the performance of approximate DP with function approximation: (i) without a sufficient combination of lookahead and m-step rollout, approximate DP may not converge; (ii) both lookahead and m-step rollout improve the convergence rate of approximate DP; and (iii) lookahead helps mitigate the effect of function approximation and the discount factor on the asymptotic performance of the algorithm. Our results are presented for two approximate DP methods: one that uses least-squares regression to perform function approximation and another that performs several steps of gradient descent of the least-squares objective in each iteration.
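The abstract describes a scheme that combines H-step lookahead for policy improvement, an m-step rollout for policy evaluation, and a least-squares fit of a linear value function. The following minimal Python/NumPy sketch illustrates one way such an iteration could look on a small tabular MDP; it is not the paper's exact algorithm or experimental setup. The transition tensor P, reward matrix r, feature matrix Phi, and all parameter values are hypothetical placeholders.

# Illustrative sketch only (hypothetical MDP data and parameters), not the authors' implementation.
import numpy as np

def lookahead_rollout_avi(P, r, Phi, gamma=0.9, H=3, m=5, iters=50, seed=0):
    """P: (A, S, S) transition matrices, r: (S, A) rewards, Phi: (S, d) state features."""
    rng = np.random.default_rng(seed)
    S, _ = r.shape
    theta = rng.normal(size=Phi.shape[1])  # weights of the linear value estimate V = Phi @ theta

    def bellman_opt(V):
        # One application of the Bellman optimality operator; also returns the greedy policy.
        Q = r + gamma * np.einsum("ast,t->sa", P, V)   # Q[s, a] = r[s, a] + gamma * sum_t P[a, s, t] V[t]
        return Q.max(axis=1), Q.argmax(axis=1)

    def bellman_pi(V, pi):
        # One application of the Bellman operator for a fixed deterministic policy pi.
        P_pi = P[pi, np.arange(S), :]                  # (S, S) transition matrix under pi
        r_pi = r[np.arange(S), pi]
        return r_pi + gamma * P_pi @ V

    for _ in range(iters):
        V = Phi @ theta
        # (1) H-step lookahead: the greedy policy returned by the last application below
        #     is greedy with respect to T^{H-1} V, i.e., the H-step lookahead policy.
        for _ in range(H):
            V, pi = bellman_opt(V)
        # (2) m-step rollout under the lookahead policy, starting from the current estimate.
        J = Phi @ theta
        for _ in range(m):
            J = bellman_pi(J, pi)
        # (3) Least-squares fit of the rollout targets with the linear architecture.
        theta, *_ = np.linalg.lstsq(Phi, J, rcond=None)
    return Phi @ theta, pi

The gradient-descent variant mentioned in the abstract would, roughly, replace the lstsq call with a few gradient steps on the squared fitting error while keeping the rest of the iteration unchanged.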
Keywords: Machine Learning and Data Science; Markov decision processes; dynamic programming
Date: 2025
Downloads: http://dx.doi.org/10.1287/opre.2022.0357 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:73:y:2025:i:1:p:139-156