Towards Explainable Deep Learning in Computational Neuroscience: Visual and Clinical Applications

Asif Mehmood, Faisal Mehmood and Jungsuk Kim
Additional contact information
Asif Mehmood: Department of Biomedical Engineering, College of IT Convergence, Gachon University, Sujeong-gu, Seongnam-si 13120, Republic of Korea
Faisal Mehmood: Department of AI and Software, College of IT Convergence, Gachon University, Sujeong-gu, Seongnam-si 13120, Republic of Korea
Jungsuk Kim: Department of Biomedical Engineering, College of IT Convergence, Gachon University, Sujeong-gu, Seongnam-si 13120, Republic of Korea

Mathematics, 2025, vol. 13, issue 20, 1-38

Abstract: Deep learning has emerged as a powerful tool in computational neuroscience, enabling the modeling of complex neural processes and supporting data-driven insights into brain function. However, the opaque nature of many deep learning models limits their interpretability, a significant barrier in neuroscience and clinical contexts where trust, transparency, and biological plausibility are essential. This review surveys structured explainable deep learning methods, such as saliency maps, attention mechanisms, and model-agnostic interpretability frameworks, that bridge the gap between performance and interpretability. We then examine the role of explainable deep learning in visual and clinical neuroscience. By surveying the literature and evaluating strengths and limitations, we highlight the contribution of explainable models to both scientific understanding and ethical deployment. Challenges such as balancing accuracy, complexity, and interpretability, the absence of standardized metrics, and scalability are assessed. Finally, we propose future directions, including integrating biological priors, establishing standardized benchmarks, and incorporating human-intervention systems. The review positions explainable deep learning not merely as a technical advancement but as a necessary paradigm for transparent, responsible, auditable, and effective computational neuroscience. In total, 177 studies were reviewed following PRISMA guidelines, providing evidence across both visual and clinical computational neuroscience domains.
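
As a concrete illustration of the saliency-map technique named in the abstract, the following minimal sketch computes an input-gradient saliency map for a small PyTorch classifier. The network, input dimensions, and data are hypothetical placeholders chosen for illustration and are not taken from the reviewed study.

    import torch
    import torch.nn as nn

    # Toy classifier standing in for a trained neuroscience/clinical model.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),
    )
    model.eval()

    # Hypothetical single-channel 64x64 input (e.g., one imaging slice).
    x = torch.randn(1, 1, 64, 64, requires_grad=True)

    # Score of the predicted class.
    scores = model(x)
    top_class = scores.argmax(dim=1).item()
    score = scores[0, top_class]

    # Gradient of the class score with respect to the input pixels.
    score.backward()

    # Saliency: per-pixel gradient magnitude, maximized over channels.
    saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (64, 64)
    print(saliency.shape)

In practice, such a map is overlaid on the original input so that the regions driving the model's prediction can be inspected for biological plausibility.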

Keywords: explainable deep learning; computational neuroscience; visual attention; clinical applications; neural networks
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/20/3286/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/20/3286/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:20:p:3286-:d:1771092

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jmathe:v:13:y:2025:i:20:p:3286-:d:1771092