EconPapers    

Exploring reinforcement learning in process control: a comprehensive survey

N. Rajasekhar, T.K. Radhakrishnan and N. Samsudeen

International Journal of Systems Science, 2025, vol. 56, issue 14, 3528-3557

Abstract: Reinforcement Learning (RL) is a machine learning methodology that learns to make sequential decisions in complex problems through trial and error. RL has become increasingly prevalent for decision-making and control tasks in diverse fields such as industrial processes, biochemical systems and energy management. This review paper presents a comprehensive examination of the development, models, algorithms and practical uses of RL, with a specific emphasis on its application in process control. The study examines the fundamental theories, methodologies and applications of RL, classifying them into two categories: classical RL, grounded in Markov decision processes (MDP), and deep RL, such as actor-critic methods. RL is discussed across multiple process industries, including industrial chemical process control, biochemical process control, energy systems, wastewater treatment and the oil and gas sector. Nevertheless, the paper also highlights challenges that hinder its wider adoption, including the requirement for substantial computational resources, the complexity of simulating real-world settings and the difficulty of guaranteeing the stability and resilience of RL algorithms in dynamic and unpredictable environments.
RL has demonstrated significant promise, but further research is needed to fully integrate it into industrial and environmental systems and to resolve the current challenges.

Abbreviations: AC: Actor critic; AI: Artificial intelligence; ANN: Artificial neural networks; A3C: Asynchronous advantage actor critic; CRL: Classical reinforcement learning; CV: Controlled variable; DDPG: Deep deterministic policy gradient; DQN: Deep Q network; DRL: Deep reinforcement learning; DP: Dynamic programming; FOMDP: Fully observable Markov decision process; GRU: Gated recurrent unit; LQR: Linear quadratic regulator; LSTM: Long short-term memory; ML: Machine learning; MV: Manipulated variable; MC: Monte Carlo; MDP: Markov decision process; MPC: Model predictive controller; MIMO: Multi input multi output; PG: Policy gradient; PID: Proportional integral derivative; PPO: Proximal policy optimisation; RL: Reinforcement learning; SAC: Soft actor critic; SISO: Single input single output; TD: Temporal difference; TRPO: Trust region policy optimisation; TD3: Twin delayed deep deterministic policy gradient.
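As a minimal illustration of the classical (MDP-based) RL category the abstract describes, the sketch below runs tabular Q-learning with a temporal-difference (TD) update on a toy chain-world MDP. The environment, reward and hyperparameters are hypothetical choices for illustration only, not drawn from the survey:

```python
import random

# Toy chain MDP (hypothetical): states 0..4, actions 0 (left) / 1 (right);
# reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Deterministic transition; reward 1 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection (trial and error)
            if random.random() < EPSILON:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # temporal-difference (TD) update toward the bootstrapped target
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy per non-goal state; moving right is optimal everywhere.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

After the goal reward has propagated back through the TD updates, the greedy policy selects "right" in every non-goal state, which is the optimal behaviour for this toy MDP.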

Date: 2025

Downloads: (external link)
http://hdl.handle.net/10.1080/00207721.2025.2469821 (text/html)
Access to full text is restricted to subscribers.



Persistent link: https://EconPapers.repec.org/RePEc:taf:tsysxx:v:56:y:2025:i:14:p:3528-3557

Ordering information: This journal article can be ordered from
http://www.tandfonline.com/pricing/journal/TSYS20

DOI: 10.1080/00207721.2025.2469821


International Journal of Systems Science is currently edited by Visakan Kadirkamanathan

More articles in International Journal of Systems Science from Taylor & Francis Journals
Bibliographic data for series maintained by Chris Longhurst ().

 
Page updated 2025-10-07
Handle: RePEc:taf:tsysxx:v:56:y:2025:i:14:p:3528-3557