An Automatic Mechanism to Recognize and Generate Emotional MIDI Sound Arts Based on Affective Computing Techniques
Hao-Chiang Koong Lin, Cong Jie Sun, Bei Ni Su and Zu An Lin
Additional contact information
Hao-Chiang Koong Lin: National University of Tainan, Tainan, Taiwan
Cong Jie Sun: National Taiwan Normal University, Taipei, Taiwan
Bei Ni Su: National University of Tainan, Tainan, Taiwan
Zu An Lin: National University of Tainan, Tainan, Taiwan
International Journal of Online Pedagogy and Course Design (IJOPCD), 2013, vol. 3, issue 3, 62-75
Abstract:
All kinds of art can be represented in digital form, and one of them is sound art, including ballads passed on by word of mouth, classical music, religious music, popular music, and emerging computer music. Recently, affective computing has drawn a great deal of attention in the academic field; it has two branches: physiological and psychological. Through a variety of sensing devices, the authors can capture the behaviors through which feelings and emotions are expressed, and may therefore not only identify but also understand human emotions. This work focuses on exploring and producing a MAX/MSP computer program that generates emotional music automatically. It can also recognize the emotion conveyed when users play MIDI instruments and create corresponding visual effects. The authors hope to achieve two major goals: (1) producing a performance of art that combines dynamic visuals with auditory tunes, and (2) making computers understand human emotions and interact through music by means of affective computing. The results of this study are as follows: (1) The authors design a mechanism that maps musical tone to recognized human emotion. (2) The authors develop a combination of affective computing and an automatic music generator. (3) The authors design a music system that can be used with a MIDI instrument and incorporated with other musical effects to add musicality. (4) The authors assess and complete the emotion discrimination mechanism so that mood music can be fed back accurately. The authors make computers simulate (or even possess) human emotion and obtain a relevant basis for more accurate sound feedback. The authors use the System Usability Scale to analyze and discuss the usability of the system. The average score of each item is clearly higher than the neutral midpoint (four points) for both the overall response and the musical performance of the "auto mood music generator," and the average score exceeds five points in each part of the Interaction and Satisfaction Scale. Subjects were willing to accept this interactive work, which shows that it is usable and has the potential for further development.
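As a rough illustration of the kind of recognition mechanism the abstract describes (the authors' actual system is a MAX/MSP patch and is not reproduced here), the Python sketch below maps three MIDI surface features to a coarse Russell-style valence-arousal emotion label. The feature set, the threshold values, and the four labels are assumptions chosen for illustration, not values taken from the paper.

    # Minimal sketch (Python, not the authors' MAX/MSP patch): classify a
    # short MIDI passage into one of four Russell-style valence-arousal
    # quadrants from three surface features. Features, thresholds, and
    # labels are illustrative assumptions, not values from the paper.

    from dataclasses import dataclass

    @dataclass
    class MidiFeatures:
        tempo_bpm: float      # performance tempo in beats per minute
        major_mode: bool      # True if the passage is in a major key
        mean_velocity: float  # average MIDI note-on velocity (0-127)

    def classify_emotion(f: MidiFeatures) -> str:
        """Approximate arousal by tempo and loudness (velocity),
        and valence by mode (major vs. minor)."""
        arousal_high = f.tempo_bpm > 110 or f.mean_velocity > 80
        if f.major_mode and arousal_high:
            return "happy"   # positive valence, high arousal
        if f.major_mode:
            return "calm"    # positive valence, low arousal
        if arousal_high:
            return "angry"   # negative valence, high arousal
        return "sad"         # negative valence, low arousal

    if __name__ == "__main__":
        # A fast, loud, minor-mode passage maps to "angry".
        print(classify_emotion(MidiFeatures(140.0, False, 95.0)))

In a system like the one described, such a label would drive both the automatic music generator and the visual effects; the cutoffs used here (110 BPM, velocity 80) are placeholders that a real implementation would calibrate against listener ratings.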
Date: 2013
Downloads: https://services.igi-global.com/resolvedoi/resolve ... 18/ijopcd.2013070104 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:igg:jopcd0:v:3:y:2013:i:3:p:62-75
International Journal of Online Pedagogy and Course Design (IJOPCD) is currently edited by Chia-Wen Tsai
More articles in International Journal of Online Pedagogy and Course Design (IJOPCD) from IGI Global
Bibliographic data for this series is maintained by the Journal Editor.