Multimodal Emotion Recognition in Learning Environments
Ramón Zatarain Cabada,
Héctor Manuel Cárdenas López and
Hugo Jair Escalante
Additional contact information
Ramón Zatarain Cabada: Instituto Tecnológico de Culiacán
Héctor Manuel Cárdenas López: Instituto Tecnológico de Culiacán
Hugo Jair Escalante: Instituto Nacional de Astrofísica, Óptica y Electrónica
Chapter 11 in Multimodal Affective Computing, 2023, pp. 123-147, from Springer
Abstract: This chapter discusses the challenges of integrating multimodal emotion and sentiment classification for affect analysis, including common architectural models, probability vector interpretation, and data pipeline implementations. Additionally, the chapter presents affective tutoring agents that have been developed using multimodal recognition. The main objective is to introduce the reader to the challenges and opportunities presented by the use of multiple modalities in creating affective tutoring systems, with a focus on data pipelines, inference models, and the interactions between emotion and sentiment.
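As one illustration of the probability-vector handling the abstract alludes to, below is a minimal decision-level (late) fusion sketch in Python: each modality's classifier emits a class-probability vector, and the vectors are combined by a weighted average. The class labels, weights, and helper name `late_fusion` are hypothetical and are not taken from the chapter.

```python
import numpy as np

def late_fusion(prob_vectors, weights=None):
    """Combine per-modality class-probability vectors by weighted averaging."""
    probs = np.asarray(prob_vectors, dtype=float)    # shape: (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)   # equal weight per modality
    weights = np.asarray(weights, dtype=float)
    fused = weights @ probs                          # weighted average over modalities
    return fused / fused.sum()                       # renormalize to a valid distribution

# Hypothetical example: face and text classifiers each emit probabilities over
# (bored, engaged, frustrated); the face model is weighted slightly higher.
face_probs = [0.10, 0.70, 0.20]
text_probs = [0.30, 0.50, 0.20]
print(late_fusion([face_probs, text_probs], weights=[0.6, 0.4]))  # [0.18 0.62 0.20]
```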
Date: 2023
Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-3-031-32542-7_11
Ordering information: This item can be ordered from
http://www.springer.com/9783031325427
DOI: 10.1007/978-3-031-32542-7_11