EconPapers    
Reinforcement Learning with Guarantees

Mario Zanon and Sébastien Gros
Additional contact information
Mario Zanon: IMT School for Advanced Studies Lucca
Sébastien Gros: Norwegian University of Science and Technology (NTNU)

Chapter 8 in Model Predictive Control, 2025, pp 191-224, from Springer

Abstract: Markov Decision Processes formalize many problems of interest and have been tackled using a variety of techniques, including Reinforcement Learning (RL) and Model Predictive Control (MPC). While each approach has its own advantages and disadvantages, RL and MPC have been very successful in their respective domains. RL makes it possible to obtain optimality for the real system without the need for a model. MPC requires a model, but makes it possible to provide strict stability and safety guarantees, as well as to promote explainability. In this regard, the two techniques are complementary, and this chapter focuses on how they can be combined in order to leverage the advantages of both.
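The kind of combination the abstract describes can be illustrated with a deliberately simple, hypothetical sketch (not taken from the chapter): an unconstrained MPC controller for a scalar linear system, built on an imperfect model parameter `a_hat`, whose parameter is tuned by an RL-style gradient step on the cost the controller actually incurs on the true plant. All system values and hyperparameters below are made up for illustration.

```python
# Hypothetical scalar example (not from the chapter): true plant x+ = A_TRUE*x + B*u,
# while the MPC controller plans with a model parameter a_hat that RL adjusts.
A_TRUE, B, Q, R = 0.9, 0.5, 1.0, 0.1

def lq_gain(a_hat, horizon=50):
    """Finite-horizon LQ (unconstrained MPC) feedback gain for the *model* x+ = a_hat*x + B*u,
    computed by a backward Riccati recursion."""
    p = Q
    k = 0.0
    for _ in range(horizon):
        k = B * p * a_hat / (R + B * B * p)
        p = Q + a_hat * p * a_hat - a_hat * p * B * k
    return k

def closed_loop_cost(a_hat, x0=1.0, steps=60):
    """Cost the MPC policy actually incurs on the TRUE plant: the RL objective."""
    k = lq_gain(a_hat)
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += Q * x * x + R * u * u
        x = A_TRUE * x + B * u   # the real system, not the model
    return cost

# RL-style tuning: gradient descent on the MPC parameter using a
# finite-difference estimate of the closed-loop performance gradient.
a_hat, lr, eps = 0.5, 0.02, 1e-4
initial_cost = closed_loop_cost(a_hat)
for _ in range(200):
    grad = (closed_loop_cost(a_hat + eps) - closed_loop_cost(a_hat - eps)) / (2 * eps)
    a_hat -= lr * grad
final_cost = closed_loop_cost(a_hat)
```

The MPC scheme supplies the structured, model-based policy (here just an LQ gain), while the RL update corrects for model mismatch by optimizing closed-loop performance directly; in this toy setting the tuned `a_hat` drifts toward the true dynamics.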

Keywords: Reinforcement learning; Model predictive control; Markov decision process; Optimality
Date: 2025

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:spr:dymchp:978-3-031-85256-5_8

Ordering information: This item can be ordered from
http://www.springer.com/9783031852565

DOI: 10.1007/978-3-031-85256-5_8


More chapters in Dynamic Modeling and Econometrics in Economics and Finance from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-06-08
Handle: RePEc:spr:dymchp:978-3-031-85256-5_8