Analyzing Diagnostic Reasoning of Vision–Language Models via Zero-Shot Chain-of-Thought Prompting in Medical Visual Question Answering

Fatema Tuj Johora Faria, Laith H. Baniata, Ahyoung Choi and Sangwoo Kang
Additional contact information
Fatema Tuj Johora Faria: Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka 1208, Bangladesh
Laith H. Baniata: School of Computing, Gachon University, Seongnam 13120, Republic of Korea
Ahyoung Choi: School of Computing, Gachon University, Seongnam 13120, Republic of Korea
Sangwoo Kang: School of Computing, Gachon University, Seongnam 13120, Republic of Korea

Mathematics, 2025, vol. 13, issue 14, 1-35

Abstract: Medical Visual Question Answering (MedVQA) lies at the intersection of computer vision, natural language processing, and clinical decision-making, aiming to generate accurate answers to complex questions about medical images. Despite recent advances in vision–language models (VLMs), their use in healthcare remains limited by a lack of interpretability and a tendency to produce direct, unexplained outputs. This opacity undermines their reliability in medical settings, where transparency and justification are critically important. To address this limitation, we propose a zero-shot chain-of-thought prompting framework that guides VLMs through multi-step reasoning before arriving at an answer. By encouraging the model to break the problem down, analyze both visual and contextual cues, and construct a stepwise explanation, the approach makes the reasoning process explicit and clinically meaningful. We evaluate the framework on the PMC-VQA benchmark, which comprises authentic radiological images paired with expert-level questions. In a comparative analysis of three leading VLMs, Gemini 2.5 Pro achieved the highest accuracy (72.48%), followed by Claude 3.5 Sonnet (69.00%) and GPT-4o Mini (67.33%). The results demonstrate that chain-of-thought prompting significantly improves both reasoning transparency and performance on MedVQA tasks.
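
The abstract describes the prompting setup only in prose; the minimal Python sketch below illustrates what zero-shot chain-of-thought prompting of a vision-language model on one MedVQA item can look like. It assumes the OpenAI Python SDK (>= 1.0) and an API key in the environment; the instruction wording, model name, and the ask() helper are illustrative assumptions, not the authors' published implementation.

# Minimal sketch of zero-shot chain-of-thought prompting for MedVQA.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set in the
# environment; instruction text and model name are illustrative only.
import base64
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

COT_INSTRUCTION = (
    "You are assisting with medical visual question answering. "
    "Reason step by step: first describe the relevant findings in the "
    "image, then relate them to the question, and finally give your "
    "answer on a separate line beginning with 'Answer:'."
)

def encode_image(path):
    # Base64-encode a local image for the data-URL image input format.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def ask(image_path, question, model="gpt-4o-mini"):
    # Send one zero-shot CoT MedVQA query and return the model's reply.
    image_b64 = encode_image(image_path)
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": COT_INSTRUCTION + "\n\nQuestion: " + question},
                {"type": "image_url",
                 "image_url": {"url": "data:image/jpeg;base64," + image_b64}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical example item; the file name and question are placeholders.
    print(ask("chest_xray.jpg", "Which lobe shows the consolidation?"))

The same request pattern carries over to the other models compared in the paper through their respective APIs; only the client library and model identifier change.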

Keywords: medical visual question answering; vision–language models; chain-of-thought prompting; zero-shot learning; medical imaging; visual reasoning
JEL-codes: C
Date: 2025

Downloads:
https://www.mdpi.com/2227-7390/13/14/2322/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/14/2322/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:14:p:2322-:d:1706591

Mathematics is currently edited by Ms. Emma He

Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jmathe:v:13:y:2025:i:14:p:2322-:d:1706591