3D-ViT-UNet: 3D Vision transformer based Unet-like model for Volumetric Brain Tumor Segmentation
Sikandar Afridi,
Atif Jan,
Muhammad Abeer Irfan,
Muhammad Irfan Khattak and
Taimur Ahmed Khan
PLOS Digital Health, 2026, vol. 5, issue 3, 1-27
Abstract:
Accurate volumetric segmentation of 3D medical imaging modalities is critical for therapy planning and clinical diagnosis, particularly for brain tumor delineation. Traditional convolutional neural network (CNN)-based architectures struggle to capture global contextual information and to model long-range dependencies in complex 3D volumetric data, which limits their segmentation performance. Transformer-based models have emerged as promising alternatives to CNNs for such tasks, addressing these limitations in capturing global spatial dependencies. We propose 3D-ViT-UNet, a novel U-shaped vision transformer (ViT)-based encoder-decoder architecture for end-to-end volumetric brain tumor segmentation. The model employs 3D Window Multi-Head Self-Attention (3D-W-MSA) to capture local features and 3D Dilated-Window Multi-Head Self-Attention (3D-DW-MSA) to capture global features while reducing computational complexity. Moreover, a dynamic position encoding strategy is integrated to preserve absolute and relative positional information and to overcome the permutation-equivariance limitation of transformers. The proposed model demonstrates state-of-the-art (SOTA) performance for brain tumor segmentation on the BraTS 2020 dataset, achieving a superior average Dice Similarity Coefficient (DSC) of 84.81% and a Hausdorff Distance (HD) of 4.87 mm with reduced computational complexity compared to existing methods. Qualitative results further demonstrate improved delineation of tumor boundaries and accurate segmentation across modalities.
Extensive quantitative and qualitative evaluations highlight the capability of 3D-ViT-UNet to achieve high accuracy with a smaller model size and lower FLOPs, making it an effective and efficient solution for clinical applications involving volumetric brain tumor segmentation.
Author summary: Brain tumors vary in size and shape across MRIs; their accurate volumetric segmentation is therefore a challenging prerequisite for therapies and surgeries. Manual segmentation is time-consuming, and results can differ between experts. We present 3D-ViT-UNet, an end-to-end volumetric segmentation model that processes an MRI as a volume rather than as independent slices. Our design combines two attention mechanisms: 3D window attention to capture fine local structure and 3D dilated-window attention to efficiently capture the broader context needed for the full tumor extent. To keep the correct spatial order of the input 3D patches, we add a dynamic, input-dependent position encoding that adapts to each MRI scan. Our method achieved state-of-the-art performance with a DSC of 84.81% and an average HD95 of 4.87 mm on the BraTS 2020 dataset. This confirms that 3D-ViT-UNet is an effective and efficient solution for clinical applications, providing high segmentation accuracy with a smaller model size and reduced computational cost.
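The difference between window attention and dilated-window attention comes down to how voxel tokens are grouped before self-attention is applied within each group: local windows gather contiguous blocks, while dilated windows gather tokens spaced by a stride so that each group spans the whole volume. A minimal NumPy sketch of these two groupings (an illustration only, not the authors' implementation; the window size, tensor layout, and function names are assumptions) could look like:

```python
import numpy as np

def local_windows(vol, w):
    """Partition a (D, H, W, C) volume into non-overlapping w*w*w blocks,
    the token grouping behind 3D window attention (3D-W-MSA)."""
    D, H, W, C = vol.shape
    v = vol.reshape(D // w, w, H // w, w, W // w, w, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)   # window indices first, token offsets after
    return v.reshape(-1, w ** 3, C)        # (num_windows, tokens_per_window, C)

def dilated_windows(vol, w):
    """Group tokens spaced by a dilation stride so that each group of w*w*w
    tokens spans the full volume, the grouping behind 3D dilated-window
    attention (3D-DW-MSA)."""
    D, H, W, C = vol.shape
    sd, sh, sw = D // w, H // w, W // w    # dilation strides per axis
    v = vol.reshape(w, sd, w, sh, w, sw, C)
    v = v.transpose(1, 3, 5, 0, 2, 4, 6)   # windows indexed by spatial offset
    return v.reshape(-1, w ** 3, C)        # tokens in a window are sd/sh/sw apart

# Toy 4x4x4 volume with one channel holding each voxel's flat index
vol = np.arange(64).reshape(4, 4, 4, 1)
print(local_windows(vol, 2)[0, :, 0])      # contiguous 2x2x2 corner block
print(dilated_windows(vol, 2)[0, :, 0])    # tokens spaced 2 apart along each axis
```

Both groupings keep the cost of self-attention at O(w^3) tokens per window; the dilated variant trades locality for global receptive field without increasing that cost.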
Date: 2026
Downloads:
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0001323 (text/html)
https://journals.plos.org/digitalhealth/article/fi ... 01323&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pdig00:0001323
DOI: 10.1371/journal.pdig.0001323