EconPapers

Lightweight and Elegant Data Reduction Strategies for Training Acceleration of Convolutional Neural Networks

Alexander Demidovskij, Artyom Tugaryov, Aleksei Trutnev, Marina Kazyulina, Igor Salnikov and Stanislav Pavlov
Additional contact information
Alexander Demidovskij: NN AI Team, Huawei Russian Research Institute, ul. Maksima Gorkogo, 117, Nizhny Novgorod 603006, Russia
Artyom Tugaryov: NN AI Team, Huawei Russian Research Institute, ul. Maksima Gorkogo, 117, Nizhny Novgorod 603006, Russia
Aleksei Trutnev: NN AI Team, Huawei Russian Research Institute, ul. Maksima Gorkogo, 117, Nizhny Novgorod 603006, Russia
Marina Kazyulina: NN AI Team, Huawei Russian Research Institute, ul. Maksima Gorkogo, 117, Nizhny Novgorod 603006, Russia
Igor Salnikov: NN AI Team, Huawei Russian Research Institute, ul. Maksima Gorkogo, 117, Nizhny Novgorod 603006, Russia
Stanislav Pavlov: NN AI Team, Huawei Russian Research Institute, ul. Maksima Gorkogo, 117, Nizhny Novgorod 603006, Russia

Mathematics, 2023, vol. 11, issue 14, 1-25

Abstract: Driven by industrial demands to handle growing volumes of training data, to lower the cost of training a single model, and to reduce the environmental impact of intensive compute consumption, accelerating the training of deep neural networks has become an increasingly important problem. This paper presents two new acceleration methods: Adaptive Online Importance Sampling (AOIS) and Intellectual Data Selection (IDS). Adaptive Online Importance Sampling accelerates neural network training by reducing the number of forward and backward passes according to how poorly the model recognizes a given data sample. Intellectual Data Selection accelerates training by removing semantic redundancies from the training dataset and thereby reducing the number of training steps. The study reports an average 1.9x training acceleration for ResNet50, ResNet18, MobileNet v2 and YOLO v5 on a variety of datasets: CIFAR-100, CIFAR-10, ImageNet 2012 and MS COCO 2017, with training data reduced by up to five times. Applying Adaptive Online Importance Sampling to ResNet50 training on ImageNet 2012 yields 2.37 times faster convergence to 71.7% top-1 accuracy, which is within 5% of the baseline. Total training time for the same number of epochs as the baseline is reduced by 1.82 times, with an accuracy drop of 2.45 p.p. Applying Intellectual Data Selection to ResNet50 training on ImageNet 2012 reduces training time by 1.27 times, with a corresponding accuracy drop of 1.12 p.p. Applying both methods to ResNet50 training on ImageNet 2012 results in a 2.31x speedup with an accuracy drop of 3.5 p.p.
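A minimal, hypothetical sketch of the loss-based sample filtering idea behind Adaptive Online Importance Sampling, written in Python with PyTorch: samples the model already classifies easily (low loss) are excluded from the backward pass, so gradient computation is concentrated on hard samples. The fixed threshold, toy model and random data below are illustrative assumptions, not the authors' implementation; the paper adapts the selection rule online and also reduces forward passes.

import torch
import torch.nn as nn

# Toy classifier and random data standing in for, e.g., CIFAR-100 training (assumption).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 100))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses

images = torch.randn(512, 3, 32, 32)
labels = torch.randint(0, 100, (512,))
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(images, labels), batch_size=64)

loss_threshold = 0.5  # assumed skip rule; the paper tunes sample importance adaptively

for x, y in loader:
    per_sample_loss = criterion(model(x), y)
    hard = per_sample_loss > loss_threshold  # "importance" proxy: current loss
    if hard.any():
        optimizer.zero_grad()
        per_sample_loss[hard].mean().backward()  # backpropagate hard samples only
        optimizer.step()

Intellectual Data Selection, by contrast, prunes semantically redundant samples from the dataset before training (for instance, by removing near-duplicate samples in an embedding space), which reduces the number of training steps per epoch.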

Keywords: deep learning training; training acceleration; convolutional neural networks; sample importance; dataset reduction
JEL-codes: C
Date: 2023

Downloads: (external link)
https://www.mdpi.com/2227-7390/11/14/3120/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/14/3120/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:14:p:3120-:d:1194384


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by the MDPI Indexing Manager.

Handle: RePEc:gam:jmathe:v:11:y:2023:i:14:p:3120-:d:1194384