TensorRT Powered Model for Ultra-Fast Li-Ion Battery Capacity Prediction on Embedded Devices
Chunxiang Zhu,
Jiacheng Qian and
Mingyu Gao
Additional contact information
Chunxiang Zhu: College of Engineering Training Centre, China Jiliang University, Hangzhou 310018, China
Jiacheng Qian: College of Engineering Training Centre, China Jiliang University, Hangzhou 310018, China
Mingyu Gao: School of Electronics and Information Engineering, Hangzhou Dianzi University, Hangzhou 310018, China
Energies, 2024, vol. 17, issue 12, 1-18
Abstract:
The LSTM neural network is often employed for time-series prediction because its strong nonlinear mapping capability and memory effect allow it to capture complex data characteristics. However, the large computational workload of neural networks can lead to long prediction times, making deployment on time-sensitive embedded devices challenging. TensorRT, a software development kit for NVIDIA hardware platforms, addresses this by optimizing the network structure and reducing inference time for deep learning inference applications. Although TensorRT inference runs on the GPU like other deep learning frameworks, it outperforms comparable frameworks in inference speed. In this paper, we compare the inference time and prediction deviation of various approaches on CPU, GPU, and TensorRT, and explore the effects of different quantization approaches. Our experiments also report the accuracy and inference latency of the same model on the PYNQ-Z1 FPGA development board, although the best results were obtained on the NVIDIA Jetson Xavier NX. The results show an approximately 50× improvement in inference speed over our previous technique, with only a 0.2% increase in Mean Absolute Percentage Error (MAPE). This work highlights the effectiveness and efficiency of TensorRT in reducing inference time, making it an excellent choice for time-sensitive embedded deployments that require high precision and low latency.
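As an illustration of the deployment path the abstract describes, the sketch below shows one common way to export a PyTorch LSTM to ONNX and build a TensorRT engine with FP16 enabled. It is not the authors' code: the model class, file names, input shape, and workspace size are hypothetical placeholders, and the TensorRT calls follow the 8.x Python API (newer releases default to explicit batch and may deprecate some flags).

import torch
import torch.nn as nn
import tensorrt as trt

# Hypothetical LSTM regressor standing in for the paper's capacity-prediction model.
class CapacityLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])  # predict capacity from the last time step

model = CapacityLSTM().eval()
dummy = torch.randn(1, 20, 1)  # (batch, sequence length, features) -- illustrative shape only
torch.onnx.export(model, dummy, "capacity_lstm.onnx",
                  input_names=["seq"], output_names=["capacity"])

# Build a TensorRT engine from the ONNX file, enabling FP16 where the hardware supports it.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))  # TensorRT 8.x style
parser = trt.OnnxParser(network, logger)
with open("capacity_lstm.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 28)  # 256 MiB, arbitrary
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)  # FP16 quantization; INT8 would additionally need a calibrator

engine_bytes = builder.build_serialized_network(network, config)
with open("capacity_lstm.engine", "wb") as f:
    f.write(engine_bytes)

The serialized engine can then be loaded on the target device (e.g., a Jetson board) with a TensorRT runtime and executed, which is where the latency comparison against plain CPU/GPU framework inference would be measured.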
Keywords: optimization; TensorRT; deep learning; long short-term memory; state of health
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2024
Downloads:
https://www.mdpi.com/1996-1073/17/12/2797/pdf (application/pdf)
https://www.mdpi.com/1996-1073/17/12/2797/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:17:y:2024:i:12:p:2797-:d:1410433