Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus

Jingda Wu, Hongwen He, Jiankun Peng, Yuecheng Li and Zhanjiang Li

Applied Energy, 2018, vol. 222, issue C, 799-811

Abstract: Reinforcement learning is a rapidly growing research area in the artificial intelligence community. Q learning, a well-known reinforcement learning algorithm, can achieve satisfactory control performance without requiring an explicit model of the controlled system's internal dynamics. However, it requires a discretized state space, which limits its application to energy management for a hybrid electric bus (HEB). In this paper, deep Q learning (DQL) is adopted for the energy management problem, and a DQL-based strategy is proposed and verified. First, the modeling of the bus powertrain configuration is described. Then, the energy management strategy based on deep Q learning is put forward: a deep neural network is employed and trained to approximate the action-value function (Q function). Furthermore, a conventional Q learning strategy based on the same model is implemented for comparison. Finally, part of the trained decision network is analyzed separately to verify the effectiveness and rationality of the DQL-based strategy. The training results indicate that the DQL-based strategy outperforms Q learning in both training time and convergence rate. The results also demonstrate that the fuel economy of the proposed strategy under an unknown driving condition reaches 89% of that of a dynamic programming-based method. In addition, the technique learns to reach the target state of charge from different initial conditions. The main contribution of this study is to apply a novel reinforcement learning methodology to energy management for an HEB that overcomes the curse of dimensionality in the state variables; the technique can be adopted to solve similar problems.
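The core idea in the abstract, replacing a discretized Q table with a trained function approximator over the continuous state, can be illustrated with a minimal, hypothetical sketch. This toy problem (steering a state-of-charge value toward a target of 0.6) and the linear-in-features approximator standing in for the paper's deep network are illustrative assumptions, not the authors' HEB model:

```python
import random

# Toy illustration: Q learning over a CONTINUOUS state (SOC in [0, 1])
# using a per-action linear function approximator instead of a Q table.
ACTIONS = [-0.05, 0.0, 0.05]  # decrease / hold / increase SOC per step
TARGET_SOC = 0.6              # hypothetical target state of charge

def features(soc, a_idx):
    # One (bias, soc, soc^2) block per action; only the chosen action's
    # block is active, so each action gets its own quadratic Q curve.
    phi = [0.0] * (3 * len(ACTIONS))
    phi[3 * a_idx:3 * a_idx + 3] = [1.0, soc, soc * soc]
    return phi

def q_value(w, soc, a_idx):
    return sum(wi * xi for wi, xi in zip(w, features(soc, a_idx)))

def train(episodes=300, steps=30, alpha=0.05, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    w = [0.0] * (3 * len(ACTIONS))
    for _ in range(episodes):
        soc = rng.uniform(0.3, 0.9)  # random initial SOC each episode
        for _ in range(steps):
            # Epsilon-greedy action selection over the approximated Q.
            if rng.random() < eps:
                a_idx = rng.randrange(len(ACTIONS))
            else:
                a_idx = max(range(len(ACTIONS)),
                            key=lambda i: q_value(w, soc, i))
            next_soc = min(1.0, max(0.0, soc + ACTIONS[a_idx]))
            reward = -abs(next_soc - TARGET_SOC)  # penalize SOC deviation
            # TD(0) update of the weights toward the bootstrapped target.
            td_target = reward + gamma * max(
                q_value(w, next_soc, i) for i in range(len(ACTIONS)))
            td_error = td_target - q_value(w, soc, a_idx)
            phi = features(soc, a_idx)
            w = [wi + alpha * td_error * xi for wi, xi in zip(w, phi)]
            soc = next_soc
    return w

w = train()
```

After training, the greedy policy prefers raising SOC when it is below the target and lowering it when above, which is the kind of continuous-state control a discretized Q table would only capture at the resolution of its grid.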

Keywords: Energy management strategy; Continuous reinforcement learning; Deep Q learning; Dynamic programming; Hybrid electric bus
Date: 2018
Citations: 82 (as counted in EconPapers)

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0306261918304422
Full text for ScienceDirect subscribers only



Persistent link: https://EconPapers.repec.org/RePEc:eee:appene:v:222:y:2018:i:c:p:799-811

Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/bibliographic

DOI: 10.1016/j.apenergy.2018.03.104


Applied Energy is currently edited by J. Yan

More articles in Applied Energy from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Page updated 2025-03-19
Handle: RePEc:eee:appene:v:222:y:2018:i:c:p:799-811