FMA-Net: Fusion of Multi-Scale Attention for Grading Cervical Precancerous Lesions
Zhuoran Duan,
Chao Xu
Zhengping Li,
Bo Feng and
Chao Nie
Additional contact information
Zhuoran Duan: School of Integrated Circuits, Anhui University, Hefei 230601, China
Chao Xu: School of Integrated Circuits, Anhui University, Hefei 230601, China
Zhengping Li: School of Integrated Circuits, Anhui University, Hefei 230601, China
Bo Feng: School of Integrated Circuits, Anhui University, Hefei 230601, China
Chao Nie: School of Integrated Circuits, Anhui University, Hefei 230601, China
Mathematics, 2024, vol. 12, issue 7, 1-17
Abstract:
Cervical cancer, the fourth most common cancer in women, poses a significant threat to women's health. Colposcopy, the most cost-effective step in cervical cancer screening, can effectively detect precancerous lesions and prevent their progression to cancer. However, the size of lesion areas in colposcopic images varies, and lesion characteristics are complex and difficult to discern, so diagnosis relies heavily on the expertise of medical professionals. To address these issues, this paper constructs a colposcopy image dataset, ACIN-3, and proposes a Fusion Multi-Scale Attention Network (FMA-Net) for the detection of cervical precancerous lesions. First, we propose a heterogeneous receptive field convolution module to construct the backbone network, which combines convolutions of different structures to extract multi-scale features over multiple receptive fields and capture features of different-sized cervical regions at different levels. Second, we propose an attention fusion module to construct a branch network, which integrates multi-scale features and establishes connections in both the spatial and channel dimensions. Finally, we design a dual-threshold loss function that introduces positive and negative thresholds to re-weight samples and address the class imbalance in the dataset. Experiments on the ACIN-3 dataset demonstrate the superior performance of our approach compared to several classical and recent state-of-the-art methods: our method achieves an accuracy of 92.2% in grading and 94.7% in detection, with average AUCs of 0.9862 and 0.9878. Heatmap visualizations confirm that our approach focuses accurately on lesion locations.
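The abstract describes the dual-threshold loss only at a high level (positive and negative thresholds that adjust sample weights to counter class imbalance). The sketch below is a hypothetical plain-Python illustration of that idea, not the authors' formulation: the threshold values `t_pos` and `t_neg`, the weight levels, and the function names are all assumptions made for clarity.

```python
import math

def dual_threshold_weight(p_true, t_pos=0.7, t_neg=0.3):
    """Hypothetical per-sample weight using two confidence thresholds.

    Samples whose predicted probability for the true class falls below
    the negative threshold are treated as very hard and boosted most;
    those between the two thresholds get a milder boost; confident
    (easy) samples keep weight 1. The specific values are illustrative.
    """
    if p_true < t_neg:   # very hard sample: strongest up-weighting
        return 2.0
    if p_true < t_pos:   # moderately hard sample: mild up-weighting
        return 1.5
    return 1.0           # easy sample: unchanged

def dual_threshold_ce(probs, labels, t_pos=0.7, t_neg=0.3):
    """Cross-entropy over a batch, re-weighted per sample.

    `probs` is a list of per-class probability rows; `labels` holds the
    integer ground-truth class for each row.
    """
    total = 0.0
    for row, y in zip(probs, labels):
        p = max(row[y], 1e-12)                       # avoid log(0)
        total += dual_threshold_weight(p, t_pos, t_neg) * -math.log(p)
    return total / len(labels)
```

Under this scheme, minority-class samples (which a biased model tends to predict with low confidence) automatically receive larger gradients, which is one plausible way thresholded re-weighting can counter the data imbalance the abstract mentions.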
Keywords: cervix; precancerous lesions; multi-scale; attention; medical image analysis; deep learning
JEL-codes: C
Date: 2024
Downloads: (external link)
https://www.mdpi.com/2227-7390/12/7/958/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/7/958/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:7:p:958-:d:1362661