EconPapers    

FTN–VQA: Multimodal Reasoning by Leveraging a Fully Transformer-Based Network for Visual Question Answering

Runmin Wang, Weixiang Xu, Yanbin Zhu, Zhenlin Zhu, Hua Chen, Yajun Ding, Jinping Liu, Changxin Gao and Nong Sang
Additional contact information
Runmin Wang: Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China
Weixiang Xu: Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China
Yanbin Zhu: Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China
Zhenlin Zhu: Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China
Hua Chen: Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China
Yajun Ding: Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China
Jinping Liu: Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China
Changxin Gao: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, P. R. China
Nong Sang: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, P. R. China

FRACTALS (fractals), 2023, vol. 31, issue 06, 1-17

Abstract: Visual Question Answering (VQA) is a multimodal task that requires understanding the information in a natural-language question and attending to the relevant information in an image. Existing solutions to VQA can be divided into grid-based methods and bottom-up methods. Grid-based methods extract semantic features directly from the image with a convolutional neural network (CNN), so they are computationally efficient, but the global convolutional features ignore key regions and create a performance bottleneck. Bottom-up methods detect potentially question-related objects with an object detection framework, e.g. Faster R-CNN, so they perform better, but the Region Proposal Network (RPN) and Non-Maximum Suppression (NMS) computations reduce their efficiency. For these reasons, we propose a fully transformer-based network (FTN) that balances computational efficiency and accuracy. It can be trained end-to-end and consists of three components: a question module, an image module, and a fusion module. We also visualize the image module and the question module to explore how the transformer operates. The experimental results demonstrate that the FTN focuses on key information and objects in the question module and the image module, and our single model reaches 69.01% accuracy on the VQA2.0 dataset, surpassing the grid-based methods. Although the FTN does not surpass a few state-of-the-art bottom-up methods, it has a clear advantage in computational efficiency. The code will be released at https://github.com/weixiang-xu/FTN-VQA.git.
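The abstract describes a three-part pipeline: a question module and an image module that each encode their own tokens with transformer attention, followed by a fusion module that combines the two streams. The paper's exact architecture is not given on this page, so the following is only a toy pure-Python sketch of that flow, using single-head scaled dot-product attention; the token values, dimensions, and the choice of question-to-image cross-attention in the fusion step are illustrative assumptions, not the authors' implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of feature vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        # Weighted sum of value vectors for this query token.
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# Toy inputs (hypothetical embeddings, not from the paper):
question_tokens = [[0.1, 0.3], [0.4, 0.2]]               # word embeddings
image_patches = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]     # patch embeddings

# Question module and image module: self-attention over their own tokens.
q_enc = attention(question_tokens, question_tokens, question_tokens)
v_enc = attention(image_patches, image_patches, image_patches)

# Fusion module (assumed here as cross-attention): question tokens
# attend to the encoded image patches to produce fused features.
fused = attention(q_enc, v_enc, v_enc)

print(len(fused), len(fused[0]))  # one fused vector per question token
```

A real transformer block would add multi-head projections, residual connections, layer normalization, and feed-forward layers; this sketch only shows how the three modules pass features to one another.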

Keywords: VQA; Transformer; Attention Mechanism; Multimodal Reasoning
Date: 2023

Downloads: (external link)
http://www.worldscientific.com/doi/abs/10.1142/S0218348X23401333
Access to full text is restricted to subscribers

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:wsi:fracta:v:31:y:2023:i:06:n:s0218348x23401333


DOI: 10.1142/S0218348X23401333


FRACTALS (fractals) is currently edited by Tara Taylor

More articles in FRACTALS (fractals) from World Scientific Publishing Co. Pte. Ltd.
Bibliographic data for series maintained by Tai Tone Lim.

 
Page updated 2025-03-20
Handle: RePEc:wsi:fracta:v:31:y:2023:i:06:n:s0218348x23401333