EconPapers    
Economics at your fingertips  
 

Conditional Image Synthesis Using Generative Diffusion Models: Application to Pathological Prostate MR Image Generation

Shaheer U. Saeed and Yipeng Hu
Additional contact information
Shaheer U. Saeed: University College London
Yipeng Hu: University College London

Chapter 4 in Generative Machine Learning Models in Medical Image Computing, 2025, pp 65-82, from Springer

Abstract: In this work, we propose an image synthesis mechanism based on diffusion, which models the reversal of the sequential addition of noise to an image. We further develop conditioning mechanisms for this approach, so that image synthesis can be conditioned on information relevant to the clinical tasks of interest. We demonstrate the conditional synthesis capabilities of such models via an example application of multi-sequence prostate MR image synthesis, conditioned on text, to control lesion presence and sequence, and on images, to generate paired MR sequences, e.g., generating diffusion-weighted MR from T2-weighted MR; these are two challenging tasks in pathological image synthesis. We validate our method using 2D image slices from real suspected prostate cancer patients. The realism of the synthetic images was validated through a blind evaluation by an expert radiologist specialising in urological MR, with 4 years of experience. The radiologist was able to distinguish between real and fake images with an accuracy of 59.4%, only slightly above the 50% random chance. For the first time, we also evaluate the realism of the generated pathology by blind expert identification of the presence of suspected lesions. We find that the clinician performs similarly on both real and synthesised images, with a 2.9 percentage point difference in lesion identification accuracy between the two, demonstrating the potential for radiological training. Additionally, we demonstrate that a machine learning model trained for lesion identification exhibited improved performance (76.2% vs 70.4%, a statistically significant increase) when its training data were augmented with synthesised images compared to training solely on real images, highlighting the utility of synthesised images in enhancing model performance.
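The abstract describes a diffusion model that learns to reverse the sequential addition of noise, with conditioning on text or paired images. The sketch below is a minimal, hypothetical illustration of that general mechanism (standard DDPM forward noising, one ancestral reverse step, and classifier-free guidance for conditioning), not the authors' implementation; the noise schedule, `eps_pred` stand-in, and guidance weight are all assumptions.

```python
import numpy as np

# Hypothetical sketch of the diffusion process described in the abstract.
# A trained network would supply the predicted noise eps_pred; here it is
# just an argument, so the maths can be checked in isolation.

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (a common DDPM choice)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative products \bar{alpha}_t

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def reverse_step(xt, t, eps_pred, rng):
    """One ancestral sampling step: estimate x_{t-1} from x_t and eps_pred."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

def guided_eps(eps_cond, eps_uncond, w=3.0):
    """Classifier-free guidance: blend conditional and unconditional
    noise predictions to strengthen conditioning (e.g., on a text prompt)."""
    return (1.0 + w) * eps_cond - w * eps_uncond
```

With the exact noise `eps` in place of a network prediction, the reverse step at `t = 0` recovers `x0` exactly, which is a convenient sanity check on the schedule arithmetic.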

Date: 2025

There are no downloads for this item, see the EconPapers FAQ for hints about obtaining it.



Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-3-031-80965-1_4

Ordering information: This item can be ordered from
http://www.springer.com/9783031809651

DOI: 10.1007/978-3-031-80965-1_4


This chapter appears in the Springer Books series from Springer. Bibliographic data for the series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-11-21
Handle: RePEc:spr:sprchp:978-3-031-80965-1_4