A Deep Learning Framework for Detecting Cross-Generational Facial Markers Associated with Stress in Pigs
Syed U. Yunas,
Ajmal Shahbaz,
Emma M. Baxter,
Kenneth M. D. Rutherford,
Mark F. Hansen,
Melvyn L. Smith and
Lyndon N. Smith
Additional contact information
Syed U. Yunas: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Ajmal Shahbaz: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Emma M. Baxter: Scotland’s Rural College (SRUC), Edinburgh EH9 3JG, UK
Kenneth M. D. Rutherford: Scotland’s Rural College (SRUC), Edinburgh EH9 3JG, UK
Mark F. Hansen: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Melvyn L. Smith: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Lyndon N. Smith: Centre for Machine Vision, University of the West of England (UWE), Bristol BS16 1QY, UK
Agriculture, 2025, vol. 15, issue 21, 1-15
Abstract:
Maternal stress during gestation can alter offspring physiology, behaviour, and immune function. In pigs, such ‘prenatal stress’ is known to increase stress sensitivity, but the potential to automatically detect such sensitivity has remained unexplored. Automatic detection of facial expression has successfully identified differences in pigs dependent on their stress status. This study progresses that work by demonstrating, for the first time, that a deep learning framework applied to facial analysis can learn stress-linked phenotypes from one generation and detect them in the next. Using a dataset of over 7000 facial images from 18 gestating sows and 53 of their daughters, we trained and evaluated five state-of-the-art deep learning architectures across six independent daughter cohorts. Attention-based models significantly outperformed CNN-based models, with the Vision Transformer (ViT) model achieving a mean accuracy of 0.78 and an average F1-score of 0.76. Grad-CAM visualisations showed that the ViT consistently attended to biologically relevant facial regions, such as the eyes and snout, whereas CNNs often focused on diffuse or non-informative areas, resulting in reduced low-stress recall and greater batch sensitivity. Models trained on maternal facial images successfully predicted stress responsiveness in daughters from unrelated lineages, indicating that the model captured generalisable facial cues of stress rather than familial resemblance. This approach supports previous work showing that machine vision can detect putatively stress-related alterations to facial expression in pigs. Future application of this approach could offer a scalable, non-invasive tool for early detection of stress in livestock production systems, opening new avenues for welfare-oriented precision livestock management and informed breeding strategies aimed at improving stress resilience.
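The abstract reports per-cohort accuracy and F1-scores for a binary (high-stress vs. low-stress) classifier, and notes that CNNs showed reduced low-stress recall. As an illustration of how those metrics relate, the sketch below computes accuracy, precision, recall, and F1 from a confusion matrix; the counts used are hypothetical and are not the study's data.

```python
# Illustrative metric computation for a binary stress classifier.
# tp/fp/fn/tn counts below are hypothetical, chosen only to show
# how accuracy and F1 are derived; they are not from the paper.

def accuracy(tp: int, fp: int, fn: int, tn: int) -> float:
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + fp + fn + tn)

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and their harmonic mean (F1) for the positive class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # e.g. "low-stress recall" if low-stress is the positive class
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical confusion-matrix counts for one evaluation cohort
tp, fp, fn, tn = 40, 8, 12, 40
p, r, f1 = precision_recall_f1(tp, fp, fn)
print(f"accuracy = {accuracy(tp, fp, fn, tn):.2f}")  # 0.80
print(f"F1       = {f1:.2f}")                        # 0.80
```

A per-cohort mean of these values, averaged over the six daughter cohorts, yields summary figures of the kind reported above (mean accuracy 0.78, average F1 0.76).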
Keywords: stress biomarkers; facial expression analysis; computer vision; deep learning; pig welfare; precision livestock farming
JEL-codes: Q1 Q10 Q11 Q12 Q13 Q14 Q15 Q16 Q17 Q18
Date: 2025
Downloads: (external link)
https://www.mdpi.com/2077-0472/15/21/2253/pdf (application/pdf)
https://www.mdpi.com/2077-0472/15/21/2253/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jagris:v:15:y:2025:i:21:p:2253-:d:1782011
Agriculture is currently edited by Ms. Leda Xuan
More articles in Agriculture from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.