Control of superheat of organic Rankine cycle under transient heat source based on deep reinforcement learning
Xuan Wang, Rui Wang, Ming Jin, Gequn Shu, Hua Tian and Jiaying Pan
Applied Energy, 2020, vol. 278, issue C, No S0306261920311399
Abstract:
The organic Rankine cycle (ORC) is a promising technology for engine waste heat recovery. During real-world operation, the engine working condition varies frequently to satisfy the power demand; thus, the transient nature of engine waste heat presents significant control challenges for the ORC. To control the superheat of the ORC precisely under a transient heat source, several optimal control methods have been used, such as model predictive control and dynamic programming. However, most of them depend strongly on accurate prediction of future disturbances. Deep reinforcement learning (DRL) is an artificial-intelligence algorithm that can overcome the aforementioned disadvantage, but the potential of DRL in the control of thermodynamic systems has not yet been investigated. Thus, this paper proposes two DRL-based methods for controlling the superheat of the ORC under a transient heat source. One directly uses the DRL agent to learn the control strategy (DRL control), and the other uses the DRL agent to optimize the parameters of the proportional–integral–derivative (PID) controller (DRL-based PID control). Additionally, a switching mechanism between different DRL controllers is proposed to improve training efficiency and enlarge the operation range of the controller. The results of this study indicate that the DRL agent can satisfactorily perform the control task and optimize the traditional controller under both trained and untrained transient heat sources. Specifically, the DRL control tracks the reference superheat with an average error of only 0.19 K, whereas that of the traditional PID control is 2.16 K. Furthermore, the proposed switching DRL control exhibits excellent tracking performance, with an average error of only 0.21 K, and robustness over a wide range of operating conditions.
The successful application of DRL demonstrates its considerable potential for the control of thermodynamic systems, providing a useful reference and motivation for the application to other thermodynamic systems.
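To make the DRL-based PID idea in the abstract concrete, the sketch below shows a discrete PID controller whose gains are exposed for online retuning, driving a toy first-order superheat model toward a reference. Everything here is a hypothetical illustration: the plant dynamics, gain values, and `simulate` helper are placeholders, not the authors' ORC model or their trained DRL agent (which would supply the gains via `set_gains` as its action).

```python
# Hedged sketch of the DRL-based PID concept from the abstract.
# The plant model, gains, and time constants below are illustrative
# assumptions, not the paper's actual ORC dynamics or tuned values.

class PID:
    """Discrete PID controller whose gains a DRL agent could retune online."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        # In the paper's DRL-based PID scheme, the agent's action would
        # update these gains each decision step; here they are set directly.
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(controller, superheat_ref=10.0, steps=200, dt=0.1):
    """Toy first-order evaporator superheat response (hypothetical dynamics)."""
    superheat = 0.0  # superheat in K relative to saturation
    for _ in range(steps):
        u = controller.step(superheat_ref, superheat)  # e.g. pump-speed command
        # First-order lag response to the actuator, purely for illustration.
        superheat += dt * (-0.5 * superheat + 0.5 * u)
    return superheat
```

A minimal usage example: `simulate(PID(2.0, 0.5, 0.1, dt=0.1))` settles near the 10 K reference; a DRL agent in this role would instead pick the three gains as its action, with the tracking error shaping its reward.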
Keywords: Organic Rankine cycle; Deep reinforcement learning; Superheat control; Artificial intelligence; Internal combustion engine
Date: 2020
Citations: 19 (as tracked by EconPapers)
Full text (ScienceDirect subscribers only): http://www.sciencedirect.com/science/article/pii/S0306261920311399
Persistent link: https://EconPapers.repec.org/RePEc:eee:appene:v:278:y:2020:i:c:s0306261920311399
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/bibliographic
DOI: 10.1016/j.apenergy.2020.115637
Applied Energy is currently edited by J. Yan