Dimensional Music Emotion Recognition by Machine Learning

Junjie Bai, Lixiao Feng, Jun Peng, Jinliang Shi, Kan Luo, Zuojin Li, Lu Liao and Yingxu Wang
Additional contact information
Junjie Bai: School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China & School of Instrument Science and Engineering, Southeast University, Nanjing, China
Lixiao Feng: School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China
Jun Peng: School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China
Jinliang Shi: School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China
Kan Luo: School of Information Science and Engineering, Fujian University of Technology, Fuzhou, China
Zuojin Li: School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China
Lu Liao: School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China
Yingxu Wang: International Institute of Cognitive Informatics and Cognitive Computing (ICIC), Laboratory for Computational Intelligence, Denotational Mathematics, and Software Science, Department of Electrical and Computer Engineering, Schulich School of Engineering and Hotchkiss Brain Institute, University of Calgary, Calgary, Canada & Information Systems Lab, Stanford University, Stanford, CA, USA

International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), 2016, vol. 10, issue 4, 74-89

Abstract: Music emotion recognition (MER) is a challenging field of study that has been addressed in multiple disciplines such as cognitive science, physiology, psychology, musicology, and the arts. In this paper, music emotions are modeled as a set of continuous variables composed of valence and arousal (VA) values based on the Valence-Arousal model. MER is formulated as a regression problem in which 548 dimensions of music features are extracted and selected. A wide range of methods, including multivariate adaptive regression spline, support vector regression (SVR), radial basis function, random forest regression (RFR), and regression neural networks, are adopted to recognize music emotions. Experimental results show that these regression algorithms achieve a good regression effect for MER. The optimal R2 statistics of the valence and arousal values are 29.3% and 62.5%, respectively, obtained by the RFR and SVR algorithms in the relief feature space.
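
As a rough illustration of the regression formulation described in the abstract, the sketch below fits support vector regression and random forest regression to predict valence and arousal values and scores them with the R2 statistic. It is not the authors' pipeline: it uses scikit-learn with randomly generated placeholders for the 548-dimensional feature matrix and the VA annotations, and it omits the actual feature extraction, relief-based feature selection, and model tuning reported in the paper.

    # Minimal sketch, not the authors' pipeline: placeholder data stands in for
    # the 548-dimensional music features and the continuous VA annotations.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 548))           # placeholder music feature matrix
    y_valence = rng.uniform(-1, 1, size=200)  # placeholder valence annotations
    y_arousal = rng.uniform(-1, 1, size=200)  # placeholder arousal annotations

    models = {
        "SVR (RBF kernel)": SVR(kernel="rbf", C=1.0),
        "Random forest regression": RandomForestRegressor(n_estimators=200, random_state=0),
    }

    # Each emotion dimension is treated as a separate regression target and
    # evaluated with the R2 statistic via cross-validation.
    for target_name, y in (("valence", y_valence), ("arousal", y_arousal)):
        for model_name, model in models.items():
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(f"{model_name} on {target_name}: mean R2 = {r2:.3f}")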

Date: 2016

Downloads: (external link)
http://services.igi-global.com/resolvedoi/resolve. ... 18/IJCINI.2016100104 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:igg:jcini0:v:10:y:2016:i:4:p:74-89


International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) is currently edited by Kangshun Li

More articles in International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) from IGI Global
Bibliographic data for series maintained by Journal Editor.

 
Handle: RePEc:igg:jcini0:v:10:y:2016:i:4:p:74-89