
Optimal Rule-Interposing Reinforcement Learning-Based Energy Management of Series–Parallel-Connected Hybrid Electric Vehicles

Lihong Dai, Peng Hu, Tianyou Wang, Guosheng Bian and Haoye Liu
Additional contact information
Lihong Dai: State Key Laboratory of Engines, Tianjin University, Tianjin 300072, China
Peng Hu: Chery Jetour Automobile Co., Ltd., Wuhu 241100, China
Tianyou Wang: State Key Laboratory of Engines, Tianjin University, Tianjin 300072, China
Guosheng Bian: KUNTYE Vehicle System Co., Ltd., Tongling 213025, China
Haoye Liu: State Key Laboratory of Engines, Tianjin University, Tianjin 300072, China

Sustainability, 2024, vol. 16, issue 16, 1-17

Abstract: P2–P3 series–parallel hybrid electric vehicles have complex configurations with multiple power sources and operating modes, which makes developing efficient energy management strategies difficult. This paper takes a P2–P3 series–parallel hybrid power system, the KunTye 2DHT, as the research object and proposes a deep reinforcement learning framework built on pre-optimized energy management to improve the energy consumption performance of hybrid electric vehicles. Firstly, a control-oriented model is established based on the system's configuration and characteristics. Then, the optimal distribution of motor energy under different operating modes is pre-optimized, which reduces the dimensionality of the energy management task by treating the two motors as a single equivalent motor. Subsequently, based on real-time traffic information under connected conditions, deep reinforcement learning is used to select the optimal operating mode of the hybrid system and the optimal power distribution between the engine and the equivalent motor. Combining these results with the pre-optimized motor split yields the optimal energy distribution among the engine and the two motors. Finally, the proposed predictive strategy is compared with the traditional Dynamic Programming and Adaptive Equivalent Consumption Minimization Strategy benchmarks, revealing its promising potential for reducing fuel consumption.
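
The two-stage structure described in the abstract (offline pre-optimization of the P2/P3 motor split, followed by an online reinforcement learning agent constrained by interposed rules) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: tabular Q-learning stands in for the paper's deep reinforcement learning agent, and the efficiency maps, state-of-charge rules, action grid, and function names are placeholder assumptions for illustration only.

# Illustrative sketch (assumptions, not the paper's code): tabular Q-learning
# replaces the deep RL agent; all maps, grids, and thresholds are made up.
import random

# --- Stage 1: offline pre-optimization of the P2/P3 torque split -----------
def motor_efficiency_p2(torque):
    # Placeholder P2 motor efficiency map (assumed shape).
    return 0.90 - 0.0005 * abs(torque - 80)

def motor_efficiency_p3(torque):
    # Placeholder P3 motor efficiency map (assumed shape).
    return 0.92 - 0.0004 * abs(torque - 120)

def preoptimize_split(total_torque, grid=21):
    """Grid-search the P2 share of the total motor torque that minimizes
    combined motor losses, so the two motors act as one equivalent motor."""
    best_share, best_loss = 0.0, float("inf")
    for i in range(grid):
        share = i / (grid - 1)
        t2, t3 = share * total_torque, (1.0 - share) * total_torque
        loss = t2 * (1.0 - motor_efficiency_p2(t2)) + t3 * (1.0 - motor_efficiency_p3(t3))
        if loss < best_loss:
            best_share, best_loss = share, loss
    return best_share

# Offline lookup table: total motor torque (Nm) -> optimal P2 share.
SPLIT_TABLE = {t: preoptimize_split(t) for t in range(0, 301, 10)}

# --- Stage 2: online RL over the engine / equivalent-motor power split -----
ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]   # fraction of demanded power from the engine
Q = {}                                   # discretized state -> list of action values

def allowed_actions(soc):
    # Interposed rules (illustrative): force engine use at low SOC,
    # limit engine use at high SOC.
    if soc < 0.3:
        return [i for i, a in enumerate(ACTIONS) if a >= 0.5]
    if soc > 0.8:
        return [i for i, a in enumerate(ACTIONS) if a <= 0.5]
    return list(range(len(ACTIONS)))

def choose_action(state, soc, eps=0.1):
    # Epsilon-greedy choice restricted to the rule-permitted actions.
    q = Q.setdefault(state, [0.0] * len(ACTIONS))
    allowed = allowed_actions(soc)
    if random.random() < eps:
        return random.choice(allowed)
    return max(allowed, key=lambda i: q[i])

def update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    # Standard one-step Q-learning update.
    q = Q.setdefault(state, [0.0] * len(ACTIONS))
    q_next = Q.setdefault(next_state, [0.0] * len(ACTIONS))
    q[action] += alpha * (reward + gamma * max(q_next) - q[action])

In the paper's framework the reward would combine fuel consumption and battery state-of-charge deviation, and the rule set and state definition would come from the 2DHT configuration and connected traffic information; here they are deliberately left abstract.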

Keywords: series–parallel HEVs; energy management; reinforcement learning; connected environment
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2071-1050/16/16/6848/pdf (application/pdf)
https://www.mdpi.com/2071-1050/16/16/6848/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:16:y:2024:i:16:p:6848-:d:1453345


Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jsusta:v:16:y:2024:i:16:p:6848-:d:1453345