EconPapers

Reinforcement learning-based intelligent energy management architecture for hybrid construction machinery

Wei Zhang, Jixin Wang, Yong Liu, Guangzong Gao, Siwen Liang and Hongfeng Ma

Applied Energy, 2020, vol. 275, issue C, No S0306261920309132

Abstract: Power allocation is of crucial significance to the energy management system of hybrid construction machinery (HCM). Most existing HCM energy management strategies are formulated from predefined rules, which leaves the system unable to adapt to changeable and complicated working conditions and thus severely limits the energy-saving potential of hybrid technology. In this paper, we build a reinforcement learning-based intelligent energy management architecture for HCM. Given the working conditions and operating characteristics of HCM, a Q-function updating method combining direct learning and indirect learning is proposed to enhance the performance and practicability of reinforcement learning. A virtual world model (VWM) is introduced to approximate the real-world environment and facilitate data-driven environment identification, so as to enhance the real-time performance and adaptability of the architecture. Based on the characteristics of HCM working conditions, the load cycle is subdivided, and a stationary Markov chain is employed to yield real-time transition probability matrices of required power, accelerating the updating of the environment model. An HCM experiment platform is built, on which typical working-condition signals are sampled for simulation. The results indicate that the Dyna-Q-based architecture outperforms Q-learning and a rule-based strategy (RBS) in terms of adaptivity, real-time performance and optimality. The results also demonstrate that, with the proposed architecture, the operation of the internal combustion engine (ICE) and the charge-discharge of the ultracapacitor are more rational and efficient.
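The architecture described in the abstract combines direct Q-learning from sampled experience with indirect (model-based) updates from a virtual world model, whose required-power dynamics are estimated as a Markov transition-probability matrix. The paper does not publish code; the following is a minimal, generic Dyna-Q sketch only — the state/action discretisation, reward function, and all parameter values are assumptions for illustration, not taken from the paper:

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical discretisation (an assumption, not from the paper):
# required-power levels act as states, ICE power set-points as actions.
N_POWER_LEVELS = 5   # discretised required-power states
N_ACTIONS = 3        # discretised ICE power set-points
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
PLANNING_STEPS = 10  # indirect-learning updates per real step

Q = defaultdict(float)                          # Q[(state, action)]
counts = defaultdict(lambda: defaultdict(int))  # power-demand transition counts
model = {}                                      # virtual world model: (s, a) -> (reward, next state)

def transition_matrix():
    """Maximum-likelihood estimate of the required-power Markov chain."""
    P = [[0.0] * N_POWER_LEVELS for _ in range(N_POWER_LEVELS)]
    for s, nxt in counts.items():
        total = sum(nxt.values())
        for s2, c in nxt.items():
            P[s][s2] = c / total
    return P

def choose_action(s):
    """Epsilon-greedy action selection over the current Q-table."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[(s, a)])

def q_update(s, a, r, s2):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(s2, a2)] for a2 in range(N_ACTIONS))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def dyna_q_step(s, a, r, s2):
    q_update(s, a, r, s2)    # direct learning from real experience
    counts[s][s2] += 1       # update power-demand transition counts
    model[(s, a)] = (r, s2)  # update the virtual world model
    for _ in range(PLANNING_STEPS):  # indirect learning from the model
        sm, am = random.choice(list(model))
        rm, sm2 = model[(sm, am)]
        q_update(sm, am, rm, sm2)

# Toy demonstration with a random load cycle (purely illustrative):
state = 0
for _ in range(200):
    action = choose_action(state)
    next_state = random.randrange(N_POWER_LEVELS)
    reward = -abs(action - next_state)  # hypothetical fuel-use penalty
    dyna_q_step(state, action, reward, next_state)
    state = next_state
```

The transition counts double as a maximum-likelihood estimate of the required-power Markov chain, which is the role the abstract assigns to the real-time transition-probability matrices; the extra planning sweeps over the learned model are what lets Dyna-Q converge with fewer real samples than plain Q-learning.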

Keywords: Hybrid construction machinery; Energy management; Reinforcement learning; Dyna-Q learning; Virtual world model
Date: 2020
References: View references in EconPapers; view the complete reference list from CitEc
Citations: View citations in EconPapers (6)

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0306261920309132
Full text for ScienceDirect subscribers only

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX; RIS (EndNote, ProCite, RefMan); HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:eee:appene:v:275:y:2020:i:c:s0306261920309132

Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/bibliographic

DOI: 10.1016/j.apenergy.2020.115401

Access Statistics for this article

Applied Energy is currently edited by J. Yan

More articles in Applied Energy from Elsevier
Bibliographic data for series maintained by Catherine Liu.

Page updated 2025-03-19
Handle: RePEc:eee:appene:v:275:y:2020:i:c:s0306261920309132