Sustainable energy-speed co-optimization for hybrid electric vehicles in dynamic car-following scenarios via multifunctional deep learning policy

He Tong, Liang Chu, Di Zhao, Zhuoran Hou and Zhiqi Guo

Energy, 2025, vol. 334, issue C

Abstract: Energy management strategies (EMSs) are crucial for improving the efficiency of hybrid electric vehicles (HEVs). To shift toward a more sustainable energy-saving paradigm, EMSs must integrate factors such as speed planning. Pulse-and-Glide (PnG) is a promising speed planning method for fuel efficiency, but it struggles to balance comfort and fuel economy, limiting its adoption. Additionally, existing studies often oversimplify target speed profiles, restricting PnG's effectiveness in dynamic, real-world scenarios. To address these issues, this paper proposes PnG-Chaser, a novel deep reinforcement learning (DRL)-based framework that synergizes EMS and adaptive PnG speed planning in dynamic car-following contexts. PnG-Chaser employs a neural network controller trained with the Rainbow algorithm, supported by carefully designed reward functions and optimized hyperparameters. This data-driven framework interacts with the environment to generate control signals that optimize speed and energy management while ensuring safety and comfort. Experimental results demonstrate that PnG-Chaser achieves 90.29 % of the fuel efficiency of a Dynamic Programming (DP)-based optimal EMS benchmark under training conditions and 90.87 % under testing conditions. It also outperforms traditional Proportional-Integral-Derivative (PID) control in safety and adaptability, delivering significant energy savings (particularly in urban environments) while maintaining a comparable comfort level and real-time responsiveness. Moreover, PnG-Chaser is validated under 13 diverse driving cycles, underscoring its robustness. Testing on a real-world dataset further demonstrates its superior performance compared with other state-of-the-art DRL-based EMSs and confirms its potential for practical deployment.
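Note: the abstract describes a controller trained with the Rainbow DRL algorithm under reward terms that trade off fuel use, safety, and ride comfort in a car-following setting. The sketch below is a minimal, hypothetical illustration of how such a multi-objective reward could be composed; it is not the authors' implementation, and all state fields, weights, and thresholds here are assumptions.

```python
from dataclasses import dataclass


@dataclass
class StepState:
    """Hypothetical per-step observation for the ego HEV in car-following."""
    fuel_rate_gps: float   # instantaneous fuel consumption [g/s]
    accel_mps2: float      # ego acceleration [m/s^2]
    jerk_mps3: float       # rate of change of acceleration [m/s^3]
    gap_m: float           # bumper-to-bumper gap to the lead vehicle [m]
    ego_speed_mps: float   # ego speed [m/s]


def car_following_reward(s: StepState,
                         w_fuel: float = 1.0,
                         w_comfort: float = 0.2,
                         w_safety: float = 5.0,
                         min_time_headway_s: float = 1.0) -> float:
    """Toy multi-objective reward: penalize fuel use and harsh accelerations,
    and apply a large penalty when the time headway to the leader becomes
    unsafe. Weights and thresholds are illustrative only."""
    fuel_penalty = w_fuel * s.fuel_rate_gps
    comfort_penalty = w_comfort * (abs(s.jerk_mps3) + abs(s.accel_mps2))
    # Time headway: gap divided by ego speed (guarded against division by zero).
    headway_s = s.gap_m / max(s.ego_speed_mps, 0.1)
    safety_penalty = w_safety if headway_s < min_time_headway_s else 0.0
    return -(fuel_penalty + comfort_penalty + safety_penalty)


# Example: a gliding phase with low fuel use and a safe gap scores better
# than a hard pulse that closes the gap aggressively.
glide = StepState(fuel_rate_gps=0.1, accel_mps2=-0.3, jerk_mps3=0.1,
                  gap_m=30.0, ego_speed_mps=15.0)
pulse = StepState(fuel_rate_gps=1.2, accel_mps2=1.5, jerk_mps3=1.0,
                  gap_m=12.0, ego_speed_mps=18.0)
print(car_following_reward(glide), car_following_reward(pulse))
```

In a Rainbow-style setup, a scalar reward of this kind would be returned by the car-following environment at every step and maximized by the trained agent; the actual reward shaping and state design in the paper may differ substantially.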

Keywords: Energy management; Speed planning; Pulse-and-Glide; Hybrid electric vehicle; Deep reinforcement learning
Date: 2025

Downloads: http://www.sciencedirect.com/science/article/pii/S0360544225032645 (full text for ScienceDirect subscribers only)

Persistent link: https://EconPapers.repec.org/RePEc:eee:energy:v:334:y:2025:i:c:s0360544225032645

DOI: 10.1016/j.energy.2025.137622

Energy is currently edited by Henrik Lund and Mark J. Kaiser

Handle: RePEc:eee:energy:v:334:y:2025:i:c:s0360544225032645