Swift Transfer of Lactating Piglet Detection Model Using Semi-Automatic Annotation Under an Unfamiliar Pig Farming Environment

Qi’an Ding, Fang Zheng, Luo Liu, Peng Li and Mingxia Shen
Additional contact information
Qi’an Ding: College of Intelligent Manufacturing, Anhui Science and Technology University, Chuzhou 233100, China
Fang Zheng: Key Laboratory of Smart Farming Technology for Agricultural Animals, Ministry of Agriculture and Rural Affairs, Wuhan 430070, China
Luo Liu: College of Veterinary Medicine, Nanjing Agricultural University, Nanjing 210014, China
Peng Li: College of Engineering, Nanjing Agricultural University, Nanjing 210031, China
Mingxia Shen: College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China

Agriculture, 2025, vol. 15, issue 7, 1-20

Abstract: Manual annotation of piglet imagery across varied farming environments is labor-intensive. To address this, we propose a semi-automatic approach within an active learning framework that integrates a pre-annotation model for piglet detection. We further examine how data sample composition influences pre-annotation efficiency to enhance the deployment of lactating piglet detection models. Our study utilizes original samples from pig farms in Jingjiang, Suqian, and Sheyang, along with new data from the Yinguang pig farm in Danyang. Using the YOLOv5 framework, we constructed both single and mixed training sets of piglet images, evaluated their performance, and selected the optimal pre-annotation model. This model generated bounding box coordinates on processed new samples, which were subsequently manually refined to train the final model. Results indicate that expanding the dataset and diversifying pigpen scenes significantly improve pre-annotation performance. The best model achieved a test precision of 0.921 on new samples, and after manual calibration, the final model exhibited a training precision of 0.968, a recall of 0.952, and an average precision of 0.979 at the IoU threshold of 0.5. The model demonstrated robust detection under various lighting conditions, with bounding boxes closely conforming to piglet contours, thereby substantially reducing manual labor. This approach is cost-effective for piglet segmentation tasks and offers strong support for advancing smart agricultural technologies.
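Pre-annotation sketch (not from the paper; a minimal illustration of the workflow described in the abstract, assuming a YOLOv5 detector already trained on the existing single or mixed pig-farm samples): the selected pre-annotation model is run over unlabeled images from the new farm and its detections are written out as YOLO-format label files, which annotators then correct by hand before training the final model. The weights filename and folder names below are hypothetical.

# Minimal pre-annotation pass with a trained YOLOv5 model (sketch, not the authors' code)
from pathlib import Path
import torch

IMAGE_DIR = Path("new_farm_images")   # unlabeled frames from the new pig farm (hypothetical folder)
LABEL_DIR = Path("pre_labels")        # YOLO-format pre-annotations for manual refinement
LABEL_DIR.mkdir(exist_ok=True)

# Load the chosen pre-annotation model via the public YOLOv5 hub interface.
model = torch.hub.load("ultralytics/yolov5", "custom", path="preannotate_piglet.pt")
model.conf = 0.25  # confidence threshold; uncertain regions are left for annotators to add

for img_path in sorted(IMAGE_DIR.glob("*.jpg")):
    results = model(str(img_path))
    # results.xywhn[0]: one row per box, normalized [x_center, y_center, w, h, conf, class]
    lines = []
    for *xywh, conf, cls in results.xywhn[0].tolist():
        lines.append(f"{int(cls)} " + " ".join(f"{v:.6f}" for v in xywh))
    (LABEL_DIR / f"{img_path.stem}.txt").write_text("\n".join(lines))

Each resulting .txt file follows the standard YOLO label convention (class, then normalized center coordinates and box size), so it can be opened directly in common labeling tools for the manual calibration step described above.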

Keywords: active learning; frame difference; pre-annotation; YOLOv5; smart agriculture (search for similar items in EconPapers)
JEL-codes: Q1 Q10 Q11 Q12 Q13 Q14 Q15 Q16 Q17 Q18 (search for similar items in EconPapers)
Date: 2025
References: View complete reference list from CitEc

Downloads: (external link)
https://www.mdpi.com/2077-0472/15/7/696/pdf (application/pdf)
https://www.mdpi.com/2077-0472/15/7/696/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jagris:v:15:y:2025:i:7:p:696-:d:1620259

Access Statistics for this article

Agriculture is currently edited by Ms. Leda Xuan

More articles in Agriculture from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jagris:v:15:y:2025:i:7:p:696-:d:1620259