Embedding-Graph-Neural-Network for Transient NOx Emissions Prediction
Yun Chen,
Chengwei Liang,
Dengcheng Liu,
Qingren Niu,
Xinke Miao,
Guangyu Dong,
Liguang Li,
Shanbin Liao,
Xiaoci Ni and
Xiaobo Huang
Additional contact information
Yun Chen: School of Automotive Studies, Tongji University, Shanghai 201804, China
Chengwei Liang: School of Automotive Studies, Tongji University, Shanghai 201804, China
Dengcheng Liu: Nanchang Automotive Institute of Intelligence & New Energy, Nanchang 330001, China
Qingren Niu: School of Automotive Studies, Tongji University, Shanghai 201804, China
Xinke Miao: School of Automotive Studies, Tongji University, Shanghai 201804, China
Guangyu Dong: School of Automotive Studies, Tongji University, Shanghai 201804, China
Liguang Li: School of Automotive Studies, Tongji University, Shanghai 201804, China
Shanbin Liao: Jiangling Motors Corporation, Nanchang 330001, China
Xiaoci Ni: School of Automotive Studies, Tongji University, Shanghai 201804, China
Xiaobo Huang: Jiangling Motors Corporation, Nanchang 330001, China
Energies, 2022, vol. 16, issue 1, 1-20
Abstract:
Recently, Artificial Intelligence (AI) methodologies such as Long Short-Term Memory (LSTM) networks have been widely considered promising tools for engine performance calibration, especially for the prediction and optimization of engine emission performance, and the Transformer is also increasingly applied to sequence prediction. High-precision engine control and calibration require predicting emission sequences over long time steps. However, LSTM suffers from vanishing gradients on very long input and output sequences, and the Transformer cannot reflect the dynamic features of historic emission information, which derive from cycle-by-cycle engine combustion events; the inherent limitations of its encoder-decoder structure therefore lead to low accuracy and weak algorithm adaptability. In this paper, considering the highly nonlinear relationship between the multi-dimensional engine operating parameters and the engine emission outputs, an Embedding-Graph-Neural-Network (EGNN) model was developed. It combines a self-attention mechanism with the adaptive graph-generation part of the GNN to capture the relationships between sequences, improve the prediction of long time step sequences, and reduce the number of parameters to simplify the network structure. A sensor embedding method was then adopted to make the model adapt to the data characteristics of different sensors and thereby reduce the impact of the experimental hardware on prediction accuracy. The experimental results show that, for long time step forecasting, the prediction error of our model decreased by 31.04% on average compared with five baseline models, which demonstrates that the EGNN model can potentially be used in future engine calibration procedures.
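The abstract describes two mechanisms at a high level: self-attention over learned sensor embeddings to generate an adaptive graph, and per-sensor embeddings that let the model adjust to each sensor's data characteristics. The following is a minimal PyTorch sketch of how such an adaptive adjacency could be built; all class names, layer sizes, and design choices here are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphAttention(nn.Module):
    """Hypothetical sketch: learn one embedding per sensor and apply
    scaled dot-product self-attention over those embeddings to produce
    an adaptive adjacency matrix for the GNN's message passing."""

    def __init__(self, num_sensors: int, embed_dim: int):
        super().__init__()
        # One learned vector per physical sensor (the "sensor embedding"
        # mentioned in the abstract), so the graph can adapt to the data
        # characteristics of each sensor.
        self.sensor_embed = nn.Embedding(num_sensors, embed_dim)
        self.query = nn.Linear(embed_dim, embed_dim)
        self.key = nn.Linear(embed_dim, embed_dim)

    def forward(self) -> torch.Tensor:
        e = self.sensor_embed.weight                # (N, D) sensor embeddings
        q, k = self.query(e), self.key(e)           # (N, D) projections
        scores = q @ k.t() / e.size(-1) ** 0.5      # (N, N) attention logits
        return F.softmax(scores, dim=-1)            # row-normalized adjacency

# Usage sketch: the adjacency weights message passing over sensor features
# x of shape (batch, N, F), e.g. torch.einsum("ij,bjf->bif", adj, x).
adj = AdaptiveGraphAttention(num_sensors=8, embed_dim=16)()
print(adj.shape)  # torch.Size([8, 8])

Because the adjacency is computed from a small set of learned embeddings rather than a full sequence-to-sequence attention stack, this construction keeps the parameter count low, consistent with the abstract's claim of a simplified network structure.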
Keywords: LSTM; transformer; sparse graph attention; EGNN
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2022
Downloads: (external link)
https://www.mdpi.com/1996-1073/16/1/3/pdf (application/pdf)
https://www.mdpi.com/1996-1073/16/1/3/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:16:y:2022:i:1:p:3-:d:1008560
Energies is currently edited by Ms. Agatha Cao
More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.