
LiteFocus-YOLO: An Efficient Network for Identifying Dense Tassels in Field Environments

Heyang Wang, Jinghuan Hu, Yunlong Ji, Chong Peng, Yu Bao, Hang Zhu, Caocan Zhu, Mengchao Chen, Ye Mu () and Hongyu Guo ()
Additional contact information
Heyang Wang: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Jinghuan Hu: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Yunlong Ji: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Chong Peng: College of Electronic Science and Engineering, Jilin University, Changchun 130012, China
Yu Bao: School of Life Science, Changchun Normal University, Changchun 130032, China
Hang Zhu: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Caocan Zhu: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Mengchao Chen: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Ye Mu: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Hongyu Guo: College of Engineering and Technology, Jilin Agricultural University, Changchun 130118, China

Agriculture, 2025, vol. 15, issue 19, 1-24

Abstract: High-efficiency, precise detection of crop ears in the field is a core component of intelligent agricultural yield estimation. However, challenges such as overlapping ears caused by dense planting, complex background interference, and the blurred boundaries of small targets severely limit the accuracy and practicality of existing detection models. This paper introduces LiteFocus-YOLO (LF-YOLO), an efficient small-object detection model that achieves high-precision identification of maize tassels and wheat ears by synergistically enhancing feature expression through cross-scale texture optimization and attention mechanisms. The model incorporates two new components: the Lightweight Target-Aware Attention Module (LTAM) strengthens high-frequency feature expression for small targets while suppressing background interference, improving robustness in densely occluded scenes; the Cross-Feature Fusion Module (CFFM) addresses semantic detail loss by modulating the fusion of deep and shallow features, improving small-target localization accuracy. Experiments validated performance on a drone-based maize tassel dataset, on which LF-YOLO achieved an mAP50 of 97.9%; it reached mAP50 scores of 94.6% and 95.7% on the publicly available maize tassel and wheat ear datasets, respectively, generalizing across different crops while maintaining high accuracy and recall. Compared with current mainstream object detection models, LF-YOLO delivers higher precision at lower computational cost, providing efficient technical support for dense small-object detection tasks in agricultural fields.
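
To make the two ideas named in the abstract concrete, the sketch below shows, in PyTorch, what a lightweight target-aware attention block and a deep–shallow cross-feature fusion block could look like. The module names, layer choices, channel sizes, and gating scheme are illustrative assumptions for this listing only; they are not the authors' published LTAM or CFFM implementations.

```python
# Minimal sketch of a lightweight attention block and a deep-shallow fusion
# block, assuming generic channel/spatial attention and semantic gating.
# This is NOT the paper's actual LTAM/CFFM code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightAttention(nn.Module):
    """Channel + spatial attention kept cheap with 1x1 convs (assumed design)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze to a bottleneck, then re-expand.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.SiLU(),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: one 7x7 conv over pooled spatial statistics.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                      # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),     # avg + max maps
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)                  # reweight locations


class CrossFeatureFusion(nn.Module):
    """Fuse a deep (semantic) map into a shallow (detail) map (assumed design)."""

    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        self.align_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        self.align_shallow = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.mix = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        deep_up = F.interpolate(self.align_deep(deep), size=shallow.shape[-2:],
                                mode="nearest")          # upsample deep semantics
        gate = torch.sigmoid(deep_up)                    # semantic gating of details
        return self.mix(self.align_shallow(shallow) * gate + deep_up)


if __name__ == "__main__":
    attn = LightweightAttention(channels=64)
    fuse = CrossFeatureFusion(shallow_ch=64, deep_ch=128, out_ch=64)
    shallow = torch.randn(1, 64, 80, 80)    # high-resolution, detail-rich map
    deep = torch.randn(1, 128, 40, 40)      # low-resolution, semantic map
    print(fuse(attn(shallow), deep).shape)  # torch.Size([1, 64, 80, 80])
```

The gating choice here (sigmoid of the upsampled deep features modulating the shallow details) is just one common way to let semantics steer fine-grained features toward small targets; the paper's modules may differ in structure and cost.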

Keywords: maize; wheat; tassel; deep learning; target detection; attention mechanism; LF-YOLO (search for similar items in EconPapers)
JEL-codes: Q1 Q10 Q11 Q12 Q13 Q14 Q15 Q16 Q17 Q18 (search for similar items in EconPapers)
Date: 2025
References: Add references at CitEc
Citations:

Downloads: (external link)
https://www.mdpi.com/2077-0472/15/19/2036/pdf (application/pdf)
https://www.mdpi.com/2077-0472/15/19/2036/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:gam:jagris:v:15:y:2025:i:19:p:2036-:d:1760446

Access Statistics for this article

Agriculture is currently edited by Ms. Leda Xuan

More articles in Agriculture from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

 
Page updated 2025-09-29
Handle: RePEc:gam:jagris:v:15:y:2025:i:19:p:2036-:d:1760446