Model-free adaptive control design for nonlinear discrete-time processes with reinforcement learning techniques

Dong Liu and Guang-Hong Yang

International Journal of Systems Science, 2018, vol. 49, issue 11, 2298-2308

Abstract: This paper deals with model-free adaptive control (MFAC) based on a reinforcement learning (RL) strategy for a family of discrete-time nonlinear processes. The controller is constructed by exploiting the approximation ability of a neural network architecture, and a new actor-critic algorithm for the neural network control problem is developed to estimate the strategic utility function and the performance index function. More specifically, the novel RL-based MFAC scheme designs the controller without needing to estimate the y(k+1) information. Furthermore, Lyapunov stability analysis shows that the closed-loop system is uniformly ultimately bounded. Simulations are presented to validate the theoretical results.
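The actor-critic structure the abstract describes can be illustrated with a minimal sketch. Everything below is an assumption made for illustration, not taken from the paper: the plant, the one-hidden-layer networks, the learning rates, and the reference signal are invented, and the actor is updated with the raw one-step tracking error under an assumed positive control-gain direction rather than the paper's RL-based law; only the critic's TD(0) estimate of a discounted cost-to-go reflects the general actor-critic idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(y, u):
    # Illustrative discrete-time nonlinear plant (not the paper's example):
    # y(k+1) = y(k) / (1 + y(k)^2) + u(k)
    return y / (1.0 + y * y) + u

def forward(W1, W2, x):
    """One-hidden-layer network: returns scalar output and hidden activations."""
    h = np.tanh(W1 @ x)
    return float(W2 @ h), h

n_h = 8                                       # hidden neurons (assumed)
Wa1 = 0.1 * rng.standard_normal((n_h, 2))     # actor weights
Wa2 = 0.1 * rng.standard_normal(n_h)
Wc1 = 0.1 * rng.standard_normal((n_h, 2))     # critic weights
Wc2 = 0.1 * rng.standard_normal(n_h)

alpha_a, alpha_c, gamma = 0.2, 0.02, 0.9      # learning rates, discount (assumed)
ref = 0.5                                     # constant reference to track
y = 1.0

for k in range(2000):
    x = np.array([y, ref])
    u, ha = forward(Wa1, Wa2, x)              # actor: control action u(k)
    y_next = plant(y, u)
    e = y_next - ref                          # one-step tracking error
    cost = e * e                              # instantaneous utility

    # Critic: TD(0) update of the estimated discounted cost-to-go J(x)
    J, hc = forward(Wc1, Wc2, x)
    J_next, _ = forward(Wc1, Wc2, np.array([y_next, ref]))
    td = cost + gamma * J_next - J
    gC2 = td * hc
    gC1 = td * np.outer(Wc2 * (1.0 - hc * hc), x)
    Wc2 += alpha_c * gC2
    Wc1 += alpha_c * gC1

    # Actor (simplified): gradient step on e^2, assuming the plant's
    # control gain is positive so the error sign gives the update direction
    gA2 = e * ha
    gA1 = e * np.outer(Wa2 * (1.0 - ha * ha), x)
    Wa2 -= alpha_a * gA2
    Wa1 -= alpha_a * gA1

    y = y_next
```

With this integral-like actor update the tracking error shrinks over the run; the paper's actual scheme additionally feeds the critic's estimates back into the actor update and proves uniform ultimate boundedness via a Lyapunov argument.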

Date: 2018
Citations: View citations in EconPapers (1)

Downloads: (external link)
http://hdl.handle.net/10.1080/00207721.2018.1498557 (text/html)
Access to full text is restricted to subscribers.



Persistent link: https://EconPapers.repec.org/RePEc:taf:tsysxx:v:49:y:2018:i:11:p:2298-2308

Ordering information: This journal article can be ordered from
http://www.tandfonline.com/pricing/journal/TSYS20

DOI: 10.1080/00207721.2018.1498557


International Journal of Systems Science is currently edited by Visakan Kadirkamanathan

More articles in International Journal of Systems Science from Taylor & Francis Journals
Bibliographic data for series maintained by Chris Longhurst.

 