EconPapers    

Online Adaptive Optimal Control Based on Reinforcement Learning

Draguna Vrabie and Frank Lewis
Additional contact information
Draguna Vrabie: Automation and Robotics Research Institute, University of Texas at Arlington
Frank Lewis: Automation and Robotics Research Institute, University of Texas at Arlington

A chapter in Optimization and Optimal Control, 2010, pp 309-323 from Springer

Abstract: In this chapter a new online direct adaptive control scheme is presented that converges to the optimal state-feedback control solution for nonlinear systems affine in the inputs. The optimal control solution is obtained in a direct fashion, without system identification. The algorithm is derived in a continuous-time framework; it is an online implementation of policy iteration, using an adaptive critic structure to find an approximate solution to the infinite-horizon, state-feedback optimal control problem.
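The chapter's scheme is an online adaptive-critic method; as a rough illustration of the policy-iteration idea it builds on, the sketch below shows the classical offline, model-based version for the linear-quadratic special case (Kleinman's algorithm): each step evaluates the current gain by solving a Lyapunov equation, then improves the gain from the resulting cost matrix. The function name, iteration count, and example system are illustrative choices, not taken from the chapter.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov


def policy_iteration_lqr(A, B, Q, R, K0, iters=20):
    """Model-based policy iteration for continuous-time LQR (Kleinman's algorithm).

    Policy evaluation: solve (A - B K)^T P + P (A - B K) = -(Q + K^T R K).
    Policy improvement: K <- R^{-1} B^T P.
    K0 must be a stabilizing gain; the iterates converge to the solution
    of the algebraic Riccati equation and the optimal gain.
    """
    K = K0
    for _ in range(iters):
        Ak = A - B @ K                      # closed-loop dynamics under current policy
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)     # improved feedback gain
    return P, K


# Scalar example: dx/dt = x + u, cost integral of x^2 + u^2.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
P, K = policy_iteration_lqr(A, B, Q, R, K0=np.array([[2.0]]))
```

For this scalar system the iterates converge to P = 1 + sqrt(2), matching the Riccati solution. The chapter's contribution, by contrast, is performing the policy-evaluation step online along system trajectories, without requiring full knowledge of the dynamics.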

Keywords: adaptive control; optimal control; dual control; dynamic programming (search for similar items in EconPapers)
Date: 2010

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:spr:spochp:978-0-387-89496-6_16

Ordering information: This item can be ordered from
http://www.springer.com/9780387894966

DOI: 10.1007/978-0-387-89496-6_16


More chapters in Springer Optimization and Its Applications from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-04-01
Handle: RePEc:spr:spochp:978-0-387-89496-6_16