A comprehensive review of visual–textual sentiment analysis from social media networks
Israa Khalaf Salman Al-Tameemi,
Mohammad-Reza Feizi-Derakhshi,
Saeed Pashazadeh and
Mohammad Asadpour
Additional contact information
Israa Khalaf Salman Al-Tameemi: University of Tabriz
Mohammad-Reza Feizi-Derakhshi: University of Tabriz
Saeed Pashazadeh: University of Tabriz
Mohammad Asadpour: University of Tabriz
Journal of Computational Social Science, 2024, vol. 7, issue 3, No 19, 2767-2838
Abstract:
Social media networks have become a significant part of people’s lives, serving as a platform for their ideas, opinions, and emotions. Consequently, automated sentiment analysis (SA) is critical for recognising people’s feelings in ways other information sources cannot. Analysing these feelings has enabled various applications, including brand evaluation, YouTube film reviews, and healthcare. As social media continues to develop, people publish vast quantities of information in various formats, such as text, pictures, audio, and video. Traditional SA algorithms have therefore become limited, as they do not consider the expressiveness of other modalities. By combining complementary cues from multiple sources, these multimodal data streams offer new opportunities for improving on text-based SA. Our study focuses on the forefront field of multimodal SA, examining the visual and textual data posted on social media networks, a combination many users favour when expressing themselves on these platforms. To serve as a resource for academics in this rapidly growing field, we present a comprehensive overview of textual and visual SA, including data pre-processing, feature extraction techniques, sentiment benchmark datasets, and the efficacy of the classification methodologies suited to each field. We also provide a brief introduction to the most frequently used data fusion strategies and a summary of existing research on visual–textual SA. Finally, we highlight the most significant challenges and examine several important sentiment applications.
Keywords: Deep learning; Machine learning; Multimodal fusion; Sentiment analysis; Visual–textual sentiment classification
Date: 2024
Downloads: http://link.springer.com/10.1007/s42001-024-00326-y Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:jcsosc:v:7:y:2024:i:3:d:10.1007_s42001-024-00326-y
DOI: 10.1007/s42001-024-00326-y
Journal of Computational Social Science is currently edited by Takashi Kamihigashi