OFPI: Optical Flow Pose Image for Action Recognition

Dong Chen, Tao Zhang, Peng Zhou, Chenyang Yan and Chuanqi Li
Additional contact information
Dong Chen: College of Computer Science and Engineering, Guangxi Normal University, Guilin 541004, China
Tao Zhang: College of Physics and Electronic Engineering, Nanning Normal University, Nanning 530001, China
Peng Zhou: College of Computer Science and Engineering, Guangxi Normal University, Guilin 541004, China
Chenyang Yan: Division of Electrical Engineering and Computer Science, Kanazawa University, Kakuma-machi, Kanazawa 920-1192, Japan
Chuanqi Li: College of Computer Science and Engineering, Guangxi Normal University, Guilin 541004, China

Mathematics, 2023, vol. 11, issue 6, 1-23

Abstract: Most approaches to action recognition based on pseudo-images encode skeletal data into RGB-like image representations. This approach cannot fully exploit the kinematic features and structural information of human poses, and the convolutional neural network (CNN) models that process pseudo-images lack a global field of view and therefore cannot completely extract action features from them. In this paper, we propose a novel pose-based action representation method called Optical Flow Pose Image (OFPI) in order to fully capitalize on the spatial and temporal information in skeletal data. Specifically, in the proposed method, an advanced pose estimator first collects skeletal data, the target person is then located with a human tracking algorithm, and that person's skeletal data are extracted. The OFPI representation is obtained by aggregating these skeletal data over time. To test the superiority of OFPI and to investigate the importance of a global field of view, we trained a simple CNN model and a transformer-based model; both achieved superior results. Owing to its global field of view, the transformer-based model in particular performed well: the OFPI-based representation achieved 98.3% and 94.2% accuracy on the KTH and JHMDB datasets, respectively. Compared with other advanced pose representation methods and multi-stream methods, OFPI achieved state-of-the-art performance on the JHMDB dataset, indicating the utility and potential of this algorithm for skeleton-based action recognition research.
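The abstract describes the pipeline only at a high level (pose estimation, person tracking, temporal aggregation of skeletal data into a pseudo-image), and does not specify the exact OFPI encoding. The sketch below is therefore only an illustrative assumption of how per-frame joint coordinates might be aggregated over time into an image-like array suitable for a CNN or transformer; the function name, channel layout, and displacement-based "motion" channel are hypothetical, not the paper's method.

```python
import numpy as np


def skeleton_sequence_to_pseudo_image(joints, image_size=64):
    """Aggregate a skeleton sequence into a pseudo-image (illustrative only).

    joints: float array of shape (T, J, 2) with normalized (x, y) coordinates
    in [0, 1] for J joints over T frames.

    Returns a (image_size, image_size, 3) float32 array: the first two
    channels encode joint positions per frame, and the third encodes
    frame-to-frame joint displacement as a crude stand-in for the motion
    information that optical flow would capture.
    """
    T, J, _ = joints.shape
    img = np.zeros((T, J, 3), dtype=np.float32)

    # Position channels: one row per frame, one column per joint.
    img[..., 0] = joints[..., 0]
    img[..., 1] = joints[..., 1]

    # Motion channel: magnitude of joint displacement between consecutive frames.
    disp = np.zeros((T, J), dtype=np.float32)
    disp[1:] = np.linalg.norm(joints[1:] - joints[:-1], axis=-1)
    img[..., 2] = disp / (disp.max() + 1e-8)

    # Resize the (T, J) grid to a fixed square so it can be fed to a CNN or
    # split into patches for a transformer; nearest-neighbour indexing keeps
    # the example dependency-free.
    rows = np.linspace(0, T - 1, image_size).round().astype(int)
    cols = np.linspace(0, J - 1, image_size).round().astype(int)
    return img[rows][:, cols]


if __name__ == "__main__":
    # Dummy sequence: 40 frames, 17 COCO-style joints with random coordinates.
    rng = np.random.default_rng(0)
    seq = rng.random((40, 17, 2)).astype(np.float32)
    pseudo_image = skeleton_sequence_to_pseudo_image(seq)
    print(pseudo_image.shape)  # (64, 64, 3)
```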

Keywords: action recognition; optical flow pose image; skeletal data; transformer
JEL-codes: C
Date: 2023

Downloads: (external link)
https://www.mdpi.com/2227-7390/11/6/1451/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/6/1451/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:6:p:1451-:d:1099719

Handle: RePEc:gam:jmathe:v:11:y:2023:i:6:p:1451-:d:1099719