Heuristic action execution for energy efficient charge-sustaining control of connected hybrid vehicles with model-free double Q-learning

Bin Shuai, Quan Zhou, Ji Li, Yinglong He, Ziyang Li, Huw Williams, Hongming Xu and Shijin Shuai

Applied Energy, 2020, vol. 267, issue C, No S0306261920304128

Abstract: This paper investigates a model-free supervisory control methodology with double Q-learning for hybrid vehicles in charge-sustaining scenarios. It aims to continuously improve the vehicle's energy efficiency while maintaining the battery's state of charge in real-world driving. Two new heuristic action execution policies, a max-value-based policy and a random policy, are proposed for the double Q-learning method to reduce overestimation of the merit-function values of each action in the vehicle's power-split control. Experimental studies based on software-in-the-loop (offline learning) and hardware-in-the-loop (online learning) platforms are carried out to explore the energy-saving potential in four driving cycles defined from real-world vehicle operations. The results from 35 rounds of offline, undisturbed learning show that the heuristic action execution policies improve the learning performance of conventional double Q-learning, achieving at least 1.09% higher energy efficiency. The proposed methods achieve results similar to those obtained by dynamic programming, but with the capability of real-time online application. Double Q-learning is shown to be more robust to turbulence during disturbed learning, realising at least a threefold improvement in energy efficiency compared to standard Q-learning. The random execution policy achieves 1.18% higher energy efficiency than the max-value-based policy under the same driving condition. Significance tests show that the deciding factor in the random execution policy has little impact on learning performance. When the control strategies are implemented for online learning, the proposed model-free control method saves more than 4.55% energy in the predefined real-world driving conditions compared to the method using standard Q-learning.
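To make the learning mechanism concrete, below is a minimal tabular sketch of double Q-learning with a heuristic max-value-based action execution policy in the spirit described above. It is illustrative only: the class name, hyperparameters, epsilon-greedy exploration, and the exact form of the execution policy (greedy over the element-wise maximum of the two merit-function tables) are assumptions, not the authors' implementation.

    import numpy as np

    class DoubleQAgent:
        """Tabular double Q-learning with a heuristic max-value-based
        execution policy (illustrative sketch, not the paper's code)."""

        def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95, eps=0.1):
            self.qa = np.zeros((n_states, n_actions))  # merit table A
            self.qb = np.zeros((n_states, n_actions))  # merit table B
            self.alpha, self.gamma, self.eps = alpha, gamma, eps
            self.n_actions = n_actions

        def act(self, s, rng):
            # Max-value-based execution: act greedily on the element-wise
            # maximum of the two merit estimates, with epsilon exploration.
            if rng.random() < self.eps:
                return int(rng.integers(self.n_actions))
            merit = np.maximum(self.qa[s], self.qb[s])
            return int(np.argmax(merit))

        def update(self, s, a, r, s_next, rng):
            # Double Q-learning: one table selects the next action, the
            # other evaluates it, which reduces value overestimation.
            if rng.random() < 0.5:
                a_star = int(np.argmax(self.qa[s_next]))
                target = r + self.gamma * self.qb[s_next, a_star]
                self.qa[s, a] += self.alpha * (target - self.qa[s, a])
            else:
                b_star = int(np.argmax(self.qb[s_next]))
                target = r + self.gamma * self.qa[s_next, b_star]
                self.qb[s, a] += self.alpha * (target - self.qb[s, a])

In the paper's power-split setting, the state would plausibly encode quantities such as battery state of charge and driver power demand, the action a discretised engine/motor power split, and the reward a penalty on fuel use and state-of-charge deviation; the abstract's alternative "random" execution policy could be sketched by acting greedily on one of the two tables selected at random according to the deciding factor.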

Keywords: Energy efficiency optimisation; Charge-sustaining control; Hybrid vehicle; Reinforcement learning; Double Q-learning
Date: 2020
Citations: 13

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0306261920304128
Full text for ScienceDirect subscribers only



Persistent link: https://EconPapers.repec.org/RePEc:eee:appene:v:267:y:2020:i:c:s0306261920304128

Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/bibliographic

DOI: 10.1016/j.apenergy.2020.114900


Applied Energy is currently edited by J. Yan

More articles in Applied Energy from Elsevier
Bibliographic data for series maintained by Catherine Liu.

Handle: RePEc:eee:appene:v:267:y:2020:i:c:s0306261920304128