TinyAct: A framework for real-time action recognition in the cloud through distillation learning
Yupaporn Wanna, Kannika Wiratchawa and Thanapong Intharah
PLOS ONE, 2026, vol. 21, issue 4, 1-21
Abstract:
Human action recognition has become increasingly important for applications in security surveillance, healthcare monitoring, and smart environments. However, existing deep learning models typically require substantial computational resources, making deployment on resource-constrained edge devices challenging. To address this limitation, we propose TinyAct, a lightweight framework for real-time human action recognition that combines edge computing with cloud-based processing through knowledge distillation. TinyAct employs a 3D video autoencoder to extract compact spatiotemporal features from video sequences, coupled with classical machine learning classifiers for action prediction. The framework utilizes an AIoT (Artificial Intelligence of Things) architecture where feature extraction occurs on edge devices while classification is performed in the cloud, enabling real-time processing with reduced bandwidth requirements. To enhance performance, we implement knowledge distillation using the ILA-ViT-B/16 transformer as a teacher model to transfer temporal knowledge to our compact student architecture. Our experiments on the Kinetics-400 dataset demonstrate that TinyAct achieves competitive performance while maintaining computational efficiency. Using 16-frame video clips with 1024-dimensional latent features, Random Forest achieved the highest baseline accuracy of 57.00%, followed by SVM (55.00%) and XGBoost (54.00%). The autoencoder-based feature extraction significantly reduces computational overhead compared to end-to-end deep learning approaches while preserving essential spatiotemporal information for accurate action recognition. The knowledge distillation experiments reveal that training configuration critically affects performance, with non-pretrained student models achieving better results (15.11% with SVM) than pretrained ones under teacher supervision. This suggests that joint optimization of the encoder and classifier is essential for effective knowledge transfer in resource-constrained settings. TinyAct’s modular architecture enables flexible deployment across diverse hardware configurations, supporting both lightweight edge inference and cloud-based training pipelines. The framework demonstrates that effective human action recognition can be achieved without computationally intensive deep networks, making it suitable for smart surveillance systems, IoT applications, and embedded devices where computational resources are limited.
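To make the pipeline concrete, below is a minimal sketch of the split the abstract describes: a compact 3D encoder on the edge device turns a 16-frame clip into a 1024-dimensional latent vector, and a classical classifier in the cloud predicts the action from that vector. Only the feature sizes (16 frames, 1024-d latents) and the classifier choices (Random Forest, SVM, XGBoost) come from the abstract; the encoder architecture, layer widths, and the name TinyEncoder3D are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class TinyEncoder3D(nn.Module):
    """Hypothetical 3D-convolutional encoder: maps a 16-frame RGB clip to a
    1024-d latent vector, matching the feature sizes reported in the abstract."""
    def __init__(self, latent_dim: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, stride=2, padding=1),   # downsample T, H, W
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # collapse the remaining spatiotemporal grid
        )
        self.proj = nn.Linear(128, latent_dim)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3, 16, H, W) -> latent: (batch, latent_dim)
        x = self.features(clip).flatten(1)
        return self.proj(x)

# Edge side: extract compact features (inference only, no gradients).
encoder = TinyEncoder3D().eval()
clips = torch.randn(8, 3, 16, 112, 112)       # 8 dummy 16-frame clips
with torch.no_grad():
    latents = encoder(clips).numpy()          # (8, 1024) features sent to the cloud

# Cloud side: classical classifier on the latent features.
labels = [0, 1, 0, 1, 2, 2, 0, 1]             # dummy action labels
clf = RandomForestClassifier(n_estimators=100).fit(latents, labels)
print(clf.predict(latents[:2]))
```

In the full framework the encoder would additionally be trained under a distillation objective against the ILA-ViT-B/16 teacher; that training step is omitted from this inference-side sketch.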
Date: 2026
Downloads:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0347245 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 47245&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0347245
DOI: 10.1371/journal.pone.0347245