Comparison of Deep Reinforcement Learning and PID Controllers for Automatic Cold Shutdown Operation

Daeil Lee, Seoryong Koo, Inseok Jang and Jonghyun Kim
Additional contact information
Daeil Lee: Department of Nuclear Engineering, Chosun University, Dong-gu, Gwangju 61452, Korea
Seoryong Koo: Korea Atomic Energy Research Institute, Yuseong-gu, Daejeon 34057, Korea
Inseok Jang: Korea Atomic Energy Research Institute, Yuseong-gu, Daejeon 34057, Korea
Jonghyun Kim: Department of Nuclear Engineering, Chosun University, Dong-gu, Gwangju 61452, Korea

Energies, 2022, vol. 15, issue 8, 1-25

Abstract: Many industries apply traditional controllers to automate manual control tasks. In recent years, artificial intelligence controllers based on deep-learning techniques have been proposed as advanced controllers that can achieve operational goals in many industrial domains, as human operators do. Deep reinforcement learning (DRL) is a powerful method by which such controllers learn how to achieve their specific operational goals. Because DRL controllers learn by sampling from a target system, they can overcome the limitations of traditional controllers, such as proportional-integral-derivative (PID) controllers. In nuclear power plants (NPPs), automatic systems can manage components during full-power operation. In contrast, startup and shutdown operations are less automated and are typically performed by operators. This study proposes DRL-based and PID-based controllers for the cold shutdown operation, which is part of the startup operation. By comparing the two controllers, this study aims to verify that a learning-based controller can overcome the limitations of traditional controllers and achieve operational goals with minimal manipulation. First, to identify the required components, operational goals, and inputs/outputs of the operation, this study analyzed the general operating procedures for the cold shutdown operation. Then, PID-based and DRL-based controllers were designed. The PID-based controller consists of PID controllers tuned using the Ziegler–Nichols rule. The DRL-based controller, built around a long short-term memory (LSTM) network, is trained with a soft actor-critic algorithm whose training time is reduced through distributed prioritized experience replay and distributed learning. The LSTM processes plant time-series data to generate control signals. Subsequently, the proposed controllers were validated on an NPP simulator during the cold shutdown operation. Finally, this study discusses operational performance by comparing the PID-based and DRL-based controllers.
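The abstract names two concrete techniques: Ziegler–Nichols tuning for the PID baseline and a soft actor-critic (SAC) policy with an LSTM for the DRL controller. As a minimal Python sketch of the first, assuming placeholder ultimate-gain (Ku) and ultimate-period (Tu) values from a closed-loop gain sweep (the class and function names are illustrative, not the authors' implementation):

# Discrete-time PID controller tuned with the classic Ziegler-Nichols
# closed-loop rule. Ku and Tu are placeholders; in practice they come from
# raising the proportional gain until the loop oscillates steadily.
class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def ziegler_nichols_pid(ku, tu, dt):
    kp = 0.6 * ku          # ZN "PID" row: Kp = 0.6 * Ku
    ki = kp / (tu / 2.0)   # Ki = Kp / Ti, with Ti = Tu / 2
    kd = kp * (tu / 8.0)   # Kd = Kp * Td, with Td = Tu / 8
    return PIDController(kp, ki, kd, dt)

controller = ziegler_nichols_pid(ku=4.0, tu=120.0, dt=1.0)  # hypothetical Ku, Tu

For the DRL side, the network details are not given in this record; the sketch below (PyTorch, with hypothetical layer sizes and observation/action dimensions) shows how an LSTM-based SAC actor can read a window of plant time-series data and emit a squashed Gaussian control action. It omits the critics, the distributed prioritized experience replay, and the distributed learners that the abstract credits for reducing training time.

import math
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    # Sketch of an SAC actor: an LSTM summarizes the observation window,
    # then linear heads produce the mean and log-std of a Gaussian whose
    # sample is squashed by tanh into the [-1, 1] actuator range.
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq):                  # obs_seq: (batch, time, obs_dim)
        out, _ = self.lstm(obs_seq)
        h = out[:, -1]                           # last time step's output
        mu = self.mu(h)
        log_std = self.log_std(h).clamp(-20, 2)
        eps = torch.randn_like(mu)               # reparameterization trick
        action = torch.tanh(mu + log_std.exp() * eps)
        # Gaussian log-prob plus the tanh change-of-variables correction,
        # which SAC's entropy term requires.
        log_prob = (-0.5 * eps.pow(2) - log_std - 0.5 * math.log(2 * math.pi)).sum(-1)
        log_prob = log_prob - torch.log(1 - action.pow(2) + 1e-6).sum(-1)
        return action, log_prob

actor = LSTMActor(obs_dim=12, act_dim=3)         # hypothetical dimensions
action, logp = actor(torch.randn(1, 30, 12))     # 30-step observation window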

Keywords: nuclear power plant; autonomous operation; artificial intelligence; deep reinforcement learning; soft actor-critic algorithm
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2022

Downloads: (external link)
https://www.mdpi.com/1996-1073/15/8/2834/pdf (application/pdf)
https://www.mdpi.com/1996-1073/15/8/2834/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:15:y:2022:i:8:p:2834-:d:792850

Handle: RePEc:gam:jeners:v:15:y:2022:i:8:p:2834-:d:792850