Challenges in explaining deep learning models for data with biological variation
Lenka Tětková, Erik Schou Dreier, Robin Malm and Lars Kai Hansen
PLOS ONE, 2025, vol. 20, issue 10, 1-20
Abstract:
Much machine learning research progress is based on developing models and evaluating them on a benchmark dataset (e.g., ImageNet for images). However, applying such benchmark-successful methods to real-world data often does not work as expected. This is particularly the case for biological data, where we expect variability at multiple temporal and spatial scales. Typical benchmark data have simple, dominant semantics, such as a number, an object type, or a word. In contrast, biological samples often have multiple semantic components, leading to complex and entangled signals. Complexity is added if the signal of interest is related to atypical states, e.g., disease, and if limited data are available for learning.

In this work, we focus on image classification of real-world biological data that are, indeed, different from standard images. We use grain data, and the goal is to detect diseases and damage, for example, “pink fusarium” and “skinned” grains. Pink fusarium, skinned grains, and other diseases and damage are key factors in setting the price of grain or excluding dangerous grains from food production. Apart from challenges stemming from the differences between these data and standard toy datasets, we also present challenges that need to be overcome when explaining deep learning models. For example, explainability methods have many hyperparameters that can yield different results, and the hyperparameter values published in the original papers do not transfer to dissimilar images. Other challenges are more general: visualizing explanations and comparing them is difficult because the magnitudes of their values differ from method to method. A fundamental open question is how to evaluate explanations. This is a non-trivial task because the “ground truth” is usually missing or ill-defined. Moreover, human annotators may create what they think is an explanation of the task at hand, yet the machine learning model might solve it in a different and perhaps counter-intuitive way. We discuss several of these challenges and evaluate various post-hoc explainability methods on grain data. We focus on robustness, quality of explanations, and similarity to particular “ground truth” annotations made by experts. The goal is to find methods that perform well overall and could be used for this challenging task. We hope that the proposed pipeline will serve as a framework for evaluating explainability methods in specific use cases.
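As an illustration of the comparison problem raised in the abstract (attribution magnitudes differ from method to method), the sketch below is not taken from the paper; it assumes a PyTorch classifier and the Captum library, and uses a hypothetical stand-in model with a dummy input. It rescales attribution maps from two post-hoc methods to a common [-1, 1] range so they can be visualized with one color scale.

# Minimal sketch (assumption, not from the paper): putting attribution maps
# from two post-hoc explainability methods on a common scale.
# Requires: torch, captum.
import torch
import torch.nn as nn
from captum.attr import Saliency, IntegratedGradients

# Hypothetical stand-in for a grain-image classifier (e.g., healthy vs. pink fusarium).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 128, 128, requires_grad=True)  # dummy grain image
target_class = 1

# Two post-hoc explanations; note the hyperparameters (baselines, n_steps)
# that, as the abstract warns, can change the resulting explanation.
saliency_map = Saliency(model).attribute(image, target=target_class)
ig_map = IntegratedGradients(model).attribute(
    image, target=target_class, baselines=torch.zeros_like(image), n_steps=64
)

def to_common_scale(attr: torch.Tensor) -> torch.Tensor:
    # Rescale an attribution map to [-1, 1] so maps from different
    # methods can be shown with a single color scale.
    return attr / (attr.abs().max() + 1e-12)

for name, attr in [("saliency", saliency_map), ("integrated_gradients", ig_map)]:
    scaled = to_common_scale(attr)
    print(name, float(scaled.min()), float(scaled.max()))

This normalization is only one possible convention; other choices (e.g., percentile clipping before rescaling) change the visual impression, which is part of the comparison difficulty the paper discusses.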
Date: 2025
Downloads:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0333965 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 33965&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0333965
DOI: 10.1371/journal.pone.0333965