Multimodal Prompt Learning in Emotion Recognition Using Context and Audio Information
Eunseo Jeong,
Gyunyeop Kim and
Sangwoo Kang
Additional contact information
Eunseo Jeong: School of Computing, Gachon University, Seongnam-si 13120, Republic of Korea
Gyunyeop Kim: School of Computing, Gachon University, Seongnam-si 13120, Republic of Korea
Sangwoo Kang: School of Computing, Gachon University, Seongnam-si 13120, Republic of Korea
Mathematics, 2023, vol. 11, issue 13, 1-13
Abstract:
Prompt learning has improved the performance of language models by narrowing the gap between their pre-training objectives and downstream tasks. However, extending prompt learning from language models pre-trained on unimodal data to multimodal sources is difficult, because the standard approach would require attaching additional deep-learning layers for the new modality. In emotion recognition, a model trained on both audio and text can be expected to classify emotions better than one trained on text alone: audio cues such as voice pitch, tone, and intonation carry information that is unavailable in the text. Thus, using both audio and text enables speech emotion-recognition models to predict emotions more accurately than semantic information alone. In this paper, in contrast to existing studies that handle multimodal data with an additional layer, we propose a method that improves speech emotion recognition through multimodal prompt learning with a text-based pre-trained model: both text and audio information are incorporated into the prompts of a language model pre-trained on natural-language text. In addition, we propose a method that improves emotion recognition for the current utterance by including the emotions and contextual information of previous utterances in the prompt. The proposed methods were evaluated on the English multimodal dataset MELD and the Korean multimodal dataset KEMDy20. Combining both methods yielded an accuracy of 87.49%, an F1 score of 44.16, and a weighted F1 score of 86.28.
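To make the idea concrete, below is a minimal sketch of prompt-based emotion classification in which a discretized audio cue (a hypothetical pitch level) and the previous utterance's emotion are verbalized directly into the text prompt of a masked language model, so no additional layers are attached. This illustrates the general technique only, not the paper's implementation; the model choice (roberta-base), the template wording, the pitch feature, and the label verbalizer are all assumptions.

    # Sketch of multimodal prompt learning via verbalization: audio cues and
    # conversational context are written into the prompt so a text-only
    # pre-trained model can use them without extra layers.
    # Illustrative only; not the authors' implementation.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="roberta-base")

    # Hypothetical label verbalizer: one vocabulary word per emotion class.
    EMOTIONS = ["joy", "anger", "sadness", "fear", "surprise", "disgust", "neutral"]

    def build_prompt(utterance, pitch_level, prev_utterance, prev_emotion):
        # Previous utterance, its emotion, and a discretized pitch level are
        # all expressed as plain text inside one prompt template.
        return (
            f'Previous utterance: "{prev_utterance}" (emotion: {prev_emotion}). '
            f'Current utterance: "{utterance}" spoken with {pitch_level} pitch. '
            f"The emotion of the current utterance is <mask>."
        )

    def predict_emotion(utterance, pitch_level, prev_utterance, prev_emotion):
        prompt = build_prompt(utterance, pitch_level, prev_utterance, prev_emotion)
        # Score only the emotion verbalizer words at the masked position.
        # RoBERTa's BPE vocabulary stores mid-sentence words with a leading space.
        candidates = fill_mask(prompt, targets=[" " + e for e in EMOTIONS])
        return max(candidates, key=lambda c: c["score"])["token_str"].strip()

    print(predict_emotion("I can't believe you did that!", "high",
                          "You broke my phone.", "anger"))

In this framing, adding audio information or dialogue context only changes the prompt text, not the model architecture, which is what lets a unimodal pre-trained language model consume multimodal signals.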
Keywords: multimodal; prompt learning; speech emotion recognition; audio processing; natural language processing
JEL-codes: C
Date: 2023
Downloads:
https://www.mdpi.com/2227-7390/11/13/2908/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/13/2908/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:13:p:2908-:d:1182172