Multi-modality Medical (CT, MRI, Ultrasound Etc.) Image Fusion Using Machine Learning/Deep Learning

Chaitanya Krishna Kasaraneni, Keerthi Guttikonda and Revanth Madamala
Additional contact information
Chaitanya Krishna Kasaraneni: Egen
Keerthi Guttikonda: Seshadri Rao Gudlavalleru Engineering College
Revanth Madamala: TikTok, ByteDance Pvt. Ltd.

A chapter in Machine Learning and Deep Learning Modeling and Algorithms with Applications in Medical and Health Care, 2025, pp 319-345 from Springer

Abstract: Modern, precise clinical decision-making requires visual evidence that shows anatomical structure and physiological function simultaneously. Fusing CT and MRI with PET, ultrasound, and related modality variants produces a deeper diagnostic assessment than any standalone scan. Recent advances in machine learning show how convolutional, encoder–decoder, and transformer architectures can be combined to create high-quality multichannel fused images that benefit cancer-margin identification, brain-injury diagnosis, cardiovascular evaluation, and urgent treatment applications. Critical preprocessing stages, including registration, intensity normalization, and noise suppression, standardize the input data, while geometric, elastic, and GAN-based augmentations mitigate data scarcity. Results on the BraTS and CHAOS benchmarks, among others, demonstrate improved boundary identification and lesion recognition under the IoU, Dice, and Hausdorff-distance evaluation measures. A comprehensive framework unites CNN, U-Net, GAN, and Vision Transformer modules through systematic attention protocols and synthetic-data-creation pathways, targeting reliable real-time medical navigation systems and precise radiotherapy tools. In benchmarking, the proposed approach improved Dice scores by 7–12% on average over traditional wavelet methods, and radiologists endorsed its sharper lesion-boundary detection. On the strength of the survey analysis and the proposed innovations, deep-learning-driven fusion establishes itself as a core component of next-generation diagnostic imaging.
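
The IoU, Dice, and Hausdorff measures cited in the abstract are standard overlap and boundary metrics for segmentation evaluation. As a minimal sketch of how they are typically computed on binary masks, assuming NumPy and SciPy are available (the function names and toy masks below are illustrative, not taken from the chapter):

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(pred, target):
        """Dice coefficient 2|A∩B| / (|A| + |B|) between binary masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        denom = pred.sum() + target.sum()
        return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

    def iou(pred, target):
        """Intersection over union (Jaccard index) between binary masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        union = np.logical_or(pred, target).sum()
        return np.logical_and(pred, target).sum() / union if union else 1.0

    def hausdorff(pred, target):
        """Symmetric Hausdorff distance between foreground pixel sets."""
        a = np.argwhere(pred.astype(bool))
        b = np.argwhere(target.astype(bool))
        return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

    # Toy example: two overlapping square "lesions" on a 64x64 slice.
    pred = np.zeros((64, 64)); pred[10:30, 10:30] = 1
    truth = np.zeros((64, 64)); truth[12:32, 12:32] = 1
    print(dice(pred, truth), iou(pred, truth), hausdorff(pred, truth))

Note that Dice and IoU reward volumetric overlap while the Hausdorff distance penalizes the worst-case boundary deviation, which is why segmentation benchmarks such as BraTS report them together.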

Keywords: Multi-modality image fusion; Deep learning; Medical imaging; U-Net; Vision Transformer; Generative Adversarial Network
Date: 2025

Persistent link: https://EconPapers.repec.org/RePEc:spr:ssrchp:978-3-031-98728-1_16

Ordering information: This item can be ordered from
http://www.springer.com/9783031987281

DOI: 10.1007/978-3-031-98728-1_16

More chapters in Springer Series in Reliability Engineering from Springer

Handle: RePEc:spr:ssrchp:978-3-031-98728-1_16