Systematic Integration of Attention Modules into CNNs for Accurate and Generalizable Medical Image Classification

Zahid Ullah, Minki Hong, Tahir Mahmood and Jihie Kim
Additional contact information
Zahid Ullah: Department of Computer Science and Artificial Intelligence, Dongguk University, Seoul 04620, Republic of Korea
Minki Hong: Department of Computer Science and Artificial Intelligence, Dongguk University, Seoul 04620, Republic of Korea
Tahir Mahmood: Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, Republic of Korea
Jihie Kim: Department of Computer Science and Artificial Intelligence, Dongguk University, Seoul 04620, Republic of Korea

Mathematics, 2025, vol. 13, issue 22, 1-27

Abstract: Deep learning has demonstrated significant promise in medical image analysis; however, standard CNNs frequently encounter challenges in detecting subtle and intricate features vital for accurate diagnosis. To address this limitation, we systematically integrated attention mechanisms into five commonly used CNN backbones: VGG16, ResNet18, InceptionV3, DenseNet121, and EfficientNetB5. Each network was modified using either a Squeeze-and-Excitation block or a hybrid Convolutional Block Attention Module, allowing for more effective recalibration of channel and spatial features. We evaluated these attention-augmented models on two distinct datasets: (1) a Products of Conception histopathological dataset containing four tissue categories, and (2) a brain tumor MRI dataset that includes multiple tumor subtypes. Across both datasets, networks enhanced with attention mechanisms consistently outperformed their baseline counterparts on all measured evaluation criteria. Importantly, EfficientNetB5 with hybrid attention achieved superior overall results, with notable enhancements in both accuracy and generalizability. In addition to improved classification outcomes, the inclusion of attention mechanisms also advanced feature localization, thereby increasing robustness across a range of imaging modalities. Our study established a comprehensive framework for incorporating attention modules into diverse CNN architectures and delineated their impact on medical image classification. These results provide important insights for the development of interpretable and clinically robust deep learning-driven diagnostic systems.
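The abstract describes recalibrating channel features with a Squeeze-and-Excitation block: global-average-pool each channel ("squeeze"), pass the result through a small bottleneck network ("excitation"), and rescale every channel by the resulting attention weight. A minimal NumPy sketch of that idea follows; the dimensions, random weights, and function names are illustrative only, not the paper's trained networks.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def se_block(x, w1, w2):
    """Squeeze-and-Excitation recalibration for a (C, H, W) feature map.

    w1: (C//r, C) reduction weights, w2: (C, C//r) expansion weights,
    where r is the reduction ratio. Here the weights are random for
    illustration; in a real network they are learned end to end.
    """
    s = x.mean(axis=(1, 2))        # squeeze: global average pool -> (C,)
    z = relu(w1 @ s)               # excitation: bottleneck FC + ReLU
    a = sigmoid(w2 @ z)            # per-channel attention weights in (0, 1)
    return x * a[:, None, None]    # rescale each channel by its weight

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = se_block(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the attention weights lie in (0, 1), the block can only attenuate channels, never amplify them; the hybrid CBAM variant mentioned in the abstract additionally applies an analogous map over spatial positions.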

Keywords: squeeze and excitation; attention mechanism; convolutional neural networks; medical image classification
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/22/3728/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/22/3728/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:22:p:3728-:d:1799231


Mathematics is currently edited by Ms. Emma He

Page updated 2025-11-25