Multi-Output Learning Based on Multimodal GCN and Co-Attention for Image Aesthetics and Emotion Analysis
Haotian Miao,
Yifei Zhang,
Daling Wang and
Shi Feng
Additional contact information
Haotian Miao: School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
Yifei Zhang: School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
Daling Wang: School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
Shi Feng: School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
Mathematics, 2021, vol. 9, issue 12, 1-17
Abstract:
With the development of social networks and intelligent terminals, sharing and acquiring images has become increasingly convenient. The massive growth in the number of social images creates a higher demand for automatic image processing, especially from the aesthetic and emotional perspectives. Both aesthetics assessment and emotion recognition require the computer to simulate high-level visual perception and understanding, which belongs to the field of image processing and pattern recognition. However, existing methods often ignore the prior knowledge of images and the intrinsic relationships between the aesthetic and emotional perspectives. Recently, machine learning and deep learning have become powerful methods for solving mathematical problems in computing, such as image processing and pattern recognition. Both images and abstract concepts can be converted into numerical matrices, and the mapping relations between them can then be established mathematically on computers. In this work, we propose an end-to-end multi-output deep learning model based on a multimodal Graph Convolutional Network (GCN) and co-attention for aesthetic and emotion conjoint analysis. In our model, a stacked multimodal GCN network encodes the features under the guidance of a correlation matrix, and a co-attention module helps the aesthetic and emotion feature representations learn from each other interactively. Experimental results indicate that our proposed model achieves competitive performance on the IAE dataset. Promising results on the AVA and ArtPhoto datasets also demonstrate the generalization ability of our model.
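The two mechanisms named in the abstract — GCN feature encoding guided by a correlation matrix, and a co-attention module that lets the aesthetic and emotion branches attend to each other — can be sketched in NumPy. This is an illustrative toy under assumed shapes and names (`gcn_layer`, `co_attention`, the bilinear affinity `Wb`, and the additive fusion are all assumptions), not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcn_layer(H, A, W):
    """One GCN propagation step: symmetrically normalize the (non-negative)
    correlation matrix A with self-loops, then propagate node features H
    through weights W with a ReLU nonlinearity."""
    A = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    A_hat = d_inv_sqrt @ A @ d_inv_sqrt        # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(A_hat @ H @ W, 0.0)

def co_attention(Fa, Fe, Wb):
    """Bilinear co-attention: affinity C = Fa Wb Fe^T, then each branch is
    enriched with a weighted sum of the other branch's features."""
    C = Fa @ Wb @ Fe.T                         # (na, ne) affinity matrix
    attn_a = softmax(C, axis=1)                # aesthetic -> emotion weights
    attn_e = softmax(C.T, axis=1)              # emotion -> aesthetic weights
    Fa_new = Fa + attn_a @ Fe                  # aesthetic branch, emotion context
    Fe_new = Fe + attn_e @ Fa                  # emotion branch, aesthetic context
    return Fa_new, Fe_new

# Toy demo with random features (shapes are assumptions).
rng = np.random.default_rng(0)
Fa = rng.normal(size=(5, 16))                  # 5 aesthetic feature vectors
Fe = rng.normal(size=(7, 16))                  # 7 emotion feature vectors
Wb = rng.normal(size=(16, 16))                 # bilinear affinity weights
Fa2, Fe2 = co_attention(Fa, Fe, Wb)

A = np.abs(np.corrcoef(rng.normal(size=(4, 10))))  # toy 4-node correlation matrix
H = rng.normal(size=(4, 16))                   # node features
W = rng.normal(size=(16, 6))                   # layer weights
H2 = gcn_layer(H, A, W)
print(Fa2.shape, Fe2.shape, H2.shape)          # (5, 16) (7, 16) (4, 6)
```

In a trained model the attention and GCN weights would be learned end-to-end; the additive residual fusion here merely illustrates how each branch's representation is updated with context from the other.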
Keywords: image aesthetics and emotion analysis; multi-output learning; image semantic recognition; multimodal learning; Graph Convolutional Network; co-attention; intermediate supervision
JEL-codes: C
Date: 2021
Citations: View citations in EconPapers (2)
Downloads:
https://www.mdpi.com/2227-7390/9/12/1437/pdf (application/pdf)
https://www.mdpi.com/2227-7390/9/12/1437/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:9:y:2021:i:12:p:1437-:d:578457
Mathematics is currently edited by Ms. Emma He