EconPapers    
P_VggNet: A convolutional neural network (CNN) with pixel-based attention map

Kunhua Liu, Peisi Zhong, Yi Zheng, Kaige Yang and Mei Liu

PLOS ONE, 2018, vol. 13, issue 12, 1-11

Abstract: Attention maps have previously been fused into the VggNet structure (EAC-Net) [1] and have yielded significant improvements over the plain VggNet. However, the E-Net in [1] was designed around facial action unit (AU) centers and is therefore applicable to facial AU detection only. To make attention maps usable for any image type, this paper proposes a new convolutional neural network (CNN) structure, P_VggNet, comprising two parts: P_Net and the 16-layer VggNet (VggNet-16). We design the generation approach for P_Net and propose the P_VggNet structure. To demonstrate the efficiency of P_VggNet, we conducted two experiments, which indicated that P_VggNet extracts image features more efficiently than VggNet-16.
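The core idea of fusing a pixel-based attention map with a CNN input can be sketched as below. This is a minimal illustration, not the authors' implementation: it assumes the attention map is applied by element-wise weighting of the input channels, and `apply_attention` is a hypothetical helper name.

```python
import numpy as np

def apply_attention(feature_map, attention_map):
    # feature_map: array of shape (C, H, W), e.g. an RGB image patch.
    # attention_map: array of shape (H, W) with per-pixel weights in [0, 1].
    # Broadcasting the map over the channel axis weights every channel
    # by the same per-pixel attention value.
    assert feature_map.shape[1:] == attention_map.shape
    return feature_map * attention_map[None, :, :]

rng = np.random.default_rng(0)
features = rng.random((3, 4, 4))   # toy 3-channel, 4x4 input
attn = np.full((4, 4), 0.5)       # uniform attention, for illustration only
out = apply_attention(features, attn)
```

In a full pipeline, the weighted result would then be passed into the VggNet-16 convolutional stack; the attention map itself would come from a learned network (here P_Net) rather than a constant.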

Date: 2018
References: View complete reference list from CitEc

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0208497 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 08497&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0208497

DOI: 10.1371/journal.pone.0208497


More articles in PLOS ONE from Public Library of Science
Bibliographic data for series maintained by plosone.

 
Page updated 2025-03-29
Handle: RePEc:plo:pone00:0208497