A Channel-Wise Spatial-Temporal Aggregation Network for Action Recognition

Huafeng Wang, Tao Xia, Hanlin Li, Xianfeng Gu, Weifeng Lv and Yuehai Wang
Additional contact information
Huafeng Wang: School of Information Technology, North China University of Technology, Beijing 100144, China
Tao Xia: School of Software, Beihang University, Beijing 100191, China
Hanlin Li: School of Information Technology, North China University of Technology, Beijing 100144, China
Xianfeng Gu: Department of Computer Science, State University of New York at Stony Brook, New York, NY 11794, USA
Weifeng Lv: School of Software, Beihang University, Beijing 100191, China
Yuehai Wang: School of Information Technology, North China University of Technology, Beijing 100144, China

Mathematics, 2021, vol. 9, issue 24, 1-17

Abstract: A central challenge in action recognition is how to effectively extract and exploit the temporal and spatial information in video (especially the temporal information). To date, many spatial-temporal convolution structures have been proposed. Despite their success, most models yield limited further gains, especially on highly time-dependent datasets, because they fail to model how spatial and temporal features should be fused within each convolution channel. In this paper, we propose a lightweight and efficient spatial-temporal extractor, the Channel-Wise Spatial-Temporal Aggregation block (CSTA block), which can be flexibly plugged into existing 2D CNNs (the resulting networks are denoted CSTANet). The CSTA block uses two branches to model spatial and temporal information separately; the temporal branch is equipped with a Motion Attention (MA) module that enhances the motion regions of a given video. We then introduce a Spatial-Temporal Channel Attention (STCA) module, which aggregates the spatial-temporal features of each block channel-wise in a self-adaptive, trainable way. Experimental results demonstrate that the proposed CSTANet achieves state-of-the-art results on the EGTEA Gaze++ and Diving48 datasets, and competitive results on Something-Something V1&V2 at a lower computational cost.
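
The abstract describes the block only at a high level. As a rough illustration of how such a two-branch, channel-wise fusion block could be wired up, below is a minimal PyTorch sketch; the layer choices, tensor layout, frame-difference motion gate, and squeeze-and-excitation-style fusion weights are all assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a CSTA-style block. Module names, shapes, and the
# fusion rule are assumptions, not the paper's released code.
import torch
import torch.nn as nn


class MotionAttention(nn.Module):
    """Gate that emphasizes moving regions via adjacent-frame differences
    (an assumed stand-in for the paper's MA module)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        # Frame differences approximate motion; pad the last step with zeros.
        diff = x[:, 1:] - x[:, :-1]
        diff = torch.cat([diff, torch.zeros_like(x[:, :1])], dim=1)
        gate = torch.sigmoid(self.conv(diff.reshape(b * t, c, h, w)))
        return x * gate.reshape(b, t, c, h, w)


class CSTABlock(nn.Module):
    """Two branches (2D spatial conv, motion-gated depthwise temporal conv)
    fused channel-wise by a learned, input-dependent attention vector."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.motion = MotionAttention(channels)
        # Depthwise 1D conv mixes information across frames per channel.
        self.temporal = nn.Conv1d(channels, channels, 3, padding=1,
                                  groups=channels)
        # Assumed STCA: per-channel weights deciding how much spatial vs.
        # temporal evidence to keep, in a self-adaptive, trainable way.
        self.stca = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # Spatial branch: an ordinary 2D conv applied to every frame.
        s = self.spatial(x.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        # Temporal branch: motion gating, then a conv along the time axis.
        m = self.motion(x)
        mt = m.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        tm = self.temporal(mt).reshape(b, h, w, c, t).permute(0, 4, 3, 1, 2)
        # Pool both branches globally, then predict per-channel fusion weights.
        pooled = torch.cat([s.mean(dim=(1, 3, 4)), tm.mean(dim=(1, 3, 4))],
                           dim=1)
        a = self.stca(pooled).reshape(b, 1, c, 1, 1)
        # Channel-wise spatial-temporal aggregation: a learned blend per channel.
        return a * s + (1.0 - a) * tm


if __name__ == "__main__":
    block = CSTABlock(channels=64)
    clip = torch.randn(2, 8, 64, 56, 56)  # (batch, frames, channels, H, W)
    out = block(clip)                      # same shape as the input
    print(out.shape)
```

The design idea mirrored here is that the learned vector decides, per channel, how much spatial versus temporal evidence survives the fusion, rather than fixing the mix by hand; because input and output shapes match, such a block can be dropped into an existing 2D CNN stage.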

Keywords: action recognition; channel-wise; spatial-temporal; video
JEL-codes: C
Date: 2021

Downloads: (external link)
https://www.mdpi.com/2227-7390/9/24/3226/pdf (application/pdf)
https://www.mdpi.com/2227-7390/9/24/3226/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:9:y:2021:i:24:p:3226-:d:701772

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jmathe:v:9:y:2021:i:24:p:3226-:d:701772