Enhanced Deep Learning for Robust Stress Classification in Sows from Facial Images

Syed U. Yunas, Ajmal Shahbaz, Emma M. Baxter, Mark F. Hansen, Melvyn L. Smith and Lyndon N. Smith
Additional contact information
Syed U. Yunas: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Ajmal Shahbaz: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Emma M. Baxter: Scotland’s Rural College (SRUC), Edinburgh EH9 3JG, UK
Mark F. Hansen: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Melvyn L. Smith: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Lyndon N. Smith: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK

Agriculture, 2025, vol. 15, issue 15, 1-14

Abstract: Stress in pigs poses significant challenges to animal welfare and productivity in modern pig farming, contributing to increased antimicrobial use and the rise of antimicrobial resistance (AMR). This study classifies stress in pregnant sows by comparing five deep learning models: ConvNeXt, EfficientNet_V2, MobileNet_V3, RegNet, and Vision Transformer (ViT). These models are applied to stress detection from facial images, leveraging an expanded dataset. A facial image dataset of primiparous sows was collected at Scotland’s Rural College (SRUC), and the images were categorized into Low-Stress (LS) and High-Stress (HS) groups based on expert behavioural assessments and cortisol level analysis. The selected deep learning models were then trained on this enriched dataset, and their performance was evaluated using cross-validation on unseen data. The ViT model outperformed the others across the dataset of annotated facial images, achieving an average accuracy of 0.75, an F1 score of 0.78 for high-stress detection, and consistent batch-level performance (up to a 0.88 F1 score). These findings highlight the efficacy of transformer-based models for automated stress detection in sows, supporting early intervention strategies to enhance welfare, optimize productivity, and mitigate AMR risks in livestock production.

Keywords: pig welfare; facial expression analysis; vision transformer (ViT); stress detection; deep learning; convolutional neural networks (CNNs); sow behavior; animal emotion recognition; automated welfare monitoring; precision livestock farming
JEL-codes: Q1 Q10 Q11 Q12 Q13 Q14 Q15 Q16 Q17 Q18
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2077-0472/15/15/1675/pdf (application/pdf)
https://www.mdpi.com/2077-0472/15/15/1675/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jagris:v:15:y:2025:i:15:p:1675-:d:1716220


Agriculture is currently edited by Ms. Leda Xuan

More articles in Agriculture from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-08-03
Handle: RePEc:gam:jagris:v:15:y:2025:i:15:p:1675-:d:1716220