Efficient Deployment of Peanut Leaf Disease Detection Models on Edge AI Devices
Zekai Lv,
Shangbin Yang,
Shichuang Ma,
Qiang Wang,
Jinti Sun,
Linlin Du,
Jiaqi Han,
Yufeng Guo and
Hui Zhang
Additional contact information
Zekai Lv: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Shangbin Yang: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Shichuang Ma: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Qiang Wang: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Jinti Sun: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Linlin Du: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Jiaqi Han: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Yufeng Guo: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Hui Zhang: College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
Agriculture, 2025, vol. 15, issue 3, 1-21
Abstract:
The intelligent transformation of crop leaf disease detection has driven the use of deep neural network algorithms to develop more accurate disease detection models. In resource-constrained environments, deploying crop leaf disease detection models in the cloud introduces challenges such as communication latency and privacy concerns, whereas edge AI devices offer lower communication latency and better scalability. To achieve efficient deployment of crop leaf disease detection models on edge AI devices, a dataset of 700 images depicting peanut leaf spot, scorch spot, and rust diseases was collected, and the YOLOX-Tiny network was used to conduct deployment experiments on the Jetson Nano B01. The experiments first examined three deployment optimizations: fusing rectified linear unit (ReLU) activations with convolution operations, integrating Efficient Non-Maximum Suppression for TensorRT (EfficientNMS_TRT) to accelerate post-processing within the TensorRT model, and converting the TensorFlow Lite model from the channels-first layout (NCHW; samples, channels, height, width) to the channels-last layout (NHWC; samples, height, width, channels). Further experiments compared the memory usage, power consumption, and inference latency of the two inference frameworks and evaluated real-time video detection performance with DeepStream. The results show that fusing ReLU activations with convolution operations reduced inference latency by 55.5% compared with using the sigmoid linear unit (SiLU) activation. In the TensorRT model, integrating the EfficientNMS_TRT module accelerated post-processing, reducing inference latency by 19.6% and increasing frames per second (FPS) by 20.4%. In the TensorFlow Lite model, conversion to the NHWC layout decreased model conversion time by 88.7% and reduced inference latency by 32.3%. These three optimizations effectively decreased inference latency and improved inference efficiency. A comparison of the two frameworks further showed that TensorFlow Lite used 15% to 20% less memory and 15% to 25% less power than TensorRT, while TensorRT achieved 53.2% to 55.2% lower inference latency than TensorFlow Lite. TensorRT is therefore suited to tasks requiring strong real-time performance and low latency, whereas TensorFlow Lite is more appropriate where memory and power are constrained. Finally, combining DeepStream with EfficientNMS_TRT optimized memory and power utilization and sped up real-time video detection, achieving 28.7 FPS at a resolution of 1280 × 720. These experiments validate the feasibility and advantages of deploying crop leaf disease detection models on edge AI devices.
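As a concrete illustration of the TensorRT deployment path summarized in the abstract, the following Python sketch builds a serialized TensorRT engine on the Jetson from a YOLOX-Tiny ONNX export. This is a minimal sketch under stated assumptions, not the authors' code: the file names yolox_tiny_peanut.onnx and yolox_tiny_peanut.engine are hypothetical, the FP16 build flag is an assumption about the Nano configuration, and the ONNX graph is assumed to already end in an EfficientNMS_TRT node (typically appended with onnx-graphsurgeon) so that non-maximum suppression runs inside the engine, in the spirit of the paper's second optimization.

    import tensorrt as trt

    LOGGER = trt.Logger(trt.Logger.WARNING)

    def build_engine(onnx_path="yolox_tiny_peanut.onnx",      # hypothetical ONNX export of YOLOX-Tiny
                     engine_path="yolox_tiny_peanut.engine"):  # hypothetical output path
        """Parse an ONNX model (assumed to end in an EfficientNMS_TRT node)
        and serialize a TensorRT engine for the Jetson Nano."""
        builder = trt.Builder(LOGGER)
        # The ONNX parser requires an explicit-batch network definition.
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, LOGGER)
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                raise RuntimeError("Failed to parse the ONNX model")
        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)  # assumption: FP16 is acceptable on the Nano
        serialized = builder.build_serialized_network(network, config)
        with open(engine_path, "wb") as f:
            f.write(serialized)
        return engine_path

    if __name__ == "__main__":
        build_engine()

On JetPack, the same build can also be performed with the trtexec command-line tool (for example, trtexec --onnx=model.onnx --saveEngine=model.engine --fp16), which additionally reports per-inference latency and offers a quick way to reproduce latency comparisons of the kind described above.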
Keywords: YOLOX-Tiny; Jetson board; peanut leaf diseases; TensorRT; TensorFlow Lite; efficient deployment optimization
JEL-codes: Q1 Q10 Q11 Q12 Q13 Q14 Q15 Q16 Q17 Q18
Date: 2025
Downloads:
https://www.mdpi.com/2077-0472/15/3/332/pdf (application/pdf)
https://www.mdpi.com/2077-0472/15/3/332/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jagris:v:15:y:2025:i:3:p:332-:d:1582510
Agriculture is currently edited by Ms. Leda Xuan