EconPapers
Benchmarking interpretability of deep learning for predictive genomics: Recall, precision, and variability of feature attribution

Justin Reynolds and Chongle Pan

PLOS Computational Biology, 2025, vol. 21, issue 12, 1-21

Abstract: Deep neural networks can model the nonlinear architecture of polygenic traits, yet the reliability of attribution methods in identifying the genetic variants driving model predictions remains uncertain. We introduce a benchmarking framework that quantifies three aspects of interpretability (attribution recall, attribution precision, and stability) and apply it to deep learning models trained on UK Biobank genotypes for standing height prediction. After quality control, feed-forward neural networks were trained on more than half a million autosomal variants from approximately 300,000 participants and evaluated using four attribution algorithms (Saliency, Gradient SHAP, DeepLIFT, Integrated Gradients) with and without SmoothGrad noise averaging. Attribution recall was assessed using synthetic spike-in variants with known additive, dominant, recessive, and epistatic effects, enabling direct measurement of sensitivity to diverse genetic architectures. Attribution precision, a proxy for specificity, was estimated using an equal number of null decoy variants that preserved allele structure while disrupting genotype-phenotype correspondence. Stability was measured by the consistency of variant-level attributions across an ensemble of independently trained models. SmoothGrad increased average recall across effect types by approximately 0.16 at the top 1% of the most highly attributed variants and improved average precision by about 0.06 at the same threshold, while stability remained comparable, with median relative standard deviations of 0.4 to 0.5 across methods. Among the evaluated attribution methods, Saliency achieved the highest composite score, indicating that its simple gradient formulation provided the best overall balance of recall, precision, and stability.

Author summary: Understanding which genetic variants contribute most to complex traits such as human height is crucial for advancing genomic research.
Deep neural networks (DNNs) offer powerful predictive capabilities for these traits, but their complexity makes it challenging to interpret which genetic features drive their predictions. In this study, we developed a comprehensive framework to objectively evaluate the reliability and biological relevance of several popular interpretation methods used with DNNs. Using genotype array data from the UK Biobank, we predicted individuals’ heights and systematically assessed four widely used interpretation algorithms: Saliency, Gradient SHAP, DeepLIFT, and Integrated Gradients. We found that interpretation performance was relatively consistent across individual attribution algorithms; however, performance generally improved for the more advanced gradient-based algorithms (i.e., Gradient SHAP, DeepLIFT, and Integrated Gradients) in the presence of SmoothGrad noise averaging. These results show that gradient-based interpretation methods can effectively recover both additive and more complex non-additive genetic signals while maintaining stable and selective feature attributions.
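The benchmark metrics described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: recall and precision are computed over the top 1% of variants ranked by absolute attribution, using known spike-in indices as positives and decoy indices as nulls, and stability is the median relative standard deviation of per-variant attributions across an ensemble of models. All function and variable names here are hypothetical.

```python
import numpy as np

def topk_recall_precision(attributions, spike_idx, decoy_idx, top_frac=0.01):
    """Recall and precision among the top `top_frac` of variants ranked
    by absolute attribution, given known spike-in (causal) and decoy
    (null) variant indices."""
    k = max(1, int(len(attributions) * top_frac))
    top = set(np.argsort(-np.abs(attributions))[:k])
    recall = len(top & set(spike_idx)) / len(spike_idx)
    flagged_spikes = len(top & set(spike_idx))
    flagged_decoys = len(top & set(decoy_idx))
    # precision over benchmark variants only: spike-ins vs. decoys in the top set
    precision = flagged_spikes / max(1, flagged_spikes + flagged_decoys)
    return recall, precision

def stability_rsd(attr_ensemble):
    """Median relative standard deviation of per-variant attributions
    across independently trained models (rows = models, cols = variants)."""
    mean = np.abs(attr_ensemble).mean(axis=0)
    sd = np.abs(attr_ensemble).std(axis=0)
    rsd = sd / np.where(mean > 0, mean, np.nan)
    return float(np.nanmedian(rsd))
```

The precision definition here scores only the spike-in/decoy benchmark variants that reach the top set, which mirrors the paper's equal-count decoy design; other denominators (e.g., all top-ranked variants) are possible.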

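SmoothGrad itself is method-agnostic: it averages an attribution (here, the raw gradient) over many noisy copies of the input. A minimal NumPy sketch on a toy differentiable model, with an analytically computed gradient standing in for backpropagation (the model, weights, and noise scale are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def model_grad(x):
    # toy nonlinear model f(x) = sum(tanh(W @ x));
    # its gradient is W.T @ (1 - tanh(W @ x)^2)
    z = W @ x
    return W.T @ (1.0 - np.tanh(z) ** 2)

def smoothgrad(grad_fn, x, n_samples=50, noise_scale=0.1):
    """Average grad_fn over n_samples Gaussian-perturbed copies of x,
    following the SmoothGrad recipe (Smilkov et al.)."""
    sigma = noise_scale * (x.max() - x.min())
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

W = rng.normal(size=(4, 10))
x = rng.normal(size=10)
sg = smoothgrad(model_grad, x)
```

With `noise_scale=0` the average collapses to the plain gradient, which is why the paper can compare each attribution algorithm with and without SmoothGrad on equal footing.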
Date: 2025

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1013784 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 13784&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1013784

DOI: 10.1371/journal.pcbi.1013784


More articles in PLOS Computational Biology from Public Library of Science

 
Page updated 2025-12-07
Handle: RePEc:plo:pcbi00:1013784