From words to visuals: a transformer-based multi-modal framework for emotion-driven tourism analytics
Víctor Calderón-Fajardo,
Ignacio Rodríguez-Rodríguez and
Miguel Puig-Cabrera
Additional contact information
Víctor Calderón-Fajardo: University of Malaga
Ignacio Rodríguez-Rodríguez: University of Malaga
Miguel Puig-Cabrera: University of Algarve
Information Technology & Tourism, 2025, vol. 27, issue 4, No 3, 939-979
Abstract:
Traditional tourism analytics have relied primarily on isolated sentiment analysis and image processing techniques, often failing to capture the subtle interaction between textual expressions and visual aesthetics inherent in tourist experiences. This study addresses these limitations by proposing a novel multi-modal framework that transforms textual reviews into AI-generated images using standardized prompts, thereby converting affective signals into explicit visual features. Leveraging state-of-the-art models—such as Distilled Bidirectional Encoder Representations from Transformers (DistilBERT) for fine-grained emotion recognition and Contrastive Language–Image Pre-training (CLIP) for semantic extraction of visual attributes—our approach maps complex sentiments onto interpretable visual characteristics, integrating explainable features to uncover the underlying structure of tourist perceptions. This approach enhances classification performance and provides a transparent mechanism for understanding how distinct emotional states correspond to specific visual cues. Experimental evaluations on a dataset encompassing four diverse tourist destinations—Berlin, Dublin, Cairo, and Málaga—demonstrate high classification accuracy and robust correlations between text-derived emotions and image-based features, with performance close to that of more powerful embedding methods. Significant correlations were observed between emotions and visual features, e.g., between brightness and contentment and between entropy and shame, indicating that our method efficiently captures the affective resonance between the visual and textual modalities. Our findings underscore the transformative potential of converting textual sentiment into visual representations to facilitate more accurate, interpretable, and actionable analytics in the tourism sector. This framework suggests promising avenues for dynamic destination characterization, informed marketing strategies, and enhanced urban planning initiatives, laying the foundation for future advancements in multi-modal tourism analytics.
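The abstract's image-feature side of the pipeline can be illustrated with a minimal sketch: compute brightness (mean luminance) and Shannon entropy from a generated image's grayscale histogram, then correlate those features with text-derived emotion scores. The function names, thresholds, and the flat test image below are illustrative assumptions, not the paper's actual implementation, which also involves DistilBERT emotion classification and CLIP attribute extraction.

```python
# Illustrative sketch (not the authors' code): derive two visual features
# mentioned in the abstract—brightness and entropy—and correlate them with
# emotion scores obtained from text.
import numpy as np

def brightness(gray: np.ndarray) -> float:
    """Mean luminance of an 8-bit grayscale image, scaled to [0, 1]."""
    return float(gray.mean() / 255.0)

def shannon_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (in bits) of the 256-bin grayscale histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def emotion_feature_correlation(emotion_scores, feature_values) -> float:
    """Pearson correlation between per-review emotion scores and a visual feature."""
    return float(np.corrcoef(emotion_scores, feature_values)[0, 1])

# A flat mid-grey image: middling brightness, zero entropy.
flat = np.full((64, 64), 128, dtype=np.uint8)
print(brightness(flat))        # ~0.502
print(shannon_entropy(flat))   # 0.0
```

In the full framework, the `emotion_scores` input would come from a DistilBERT emotion classifier over reviews and each image would be generated from a standardized prompt; here they are stand-in arrays used only to show the correlation step.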
Keywords: Multimodal tourism analytics; Transformer models; Text-to-Image generation; Affective sentiment analysis; Explainable AI; Destination classification
Date: 2025
Downloads:
http://link.springer.com/10.1007/s40558-025-00334-2 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:infott:v:27:y:2025:i:4:d:10.1007_s40558-025-00334-2
Ordering information: This journal article can be ordered from
http://www.springer. ... ystems/journal/40558
DOI: 10.1007/s40558-025-00334-2
Information Technology & Tourism is currently edited by Zheng Xiang