
Automatic Compression of Neural Network with Deep Reinforcement Learning Based on Proximal Gradient Method

Mingyi Wang, Jianhao Tang, Haoli Zhao, Zhenni Li and Shengli Xie
Additional contact information
Mingyi Wang: School of Automation, Guangdong University of Technology, Guangzhou 510006, China
Jianhao Tang: School of Automation, Guangdong University of Technology, Guangzhou 510006, China
Haoli Zhao: School of Automation, Guangdong University of Technology, Guangzhou 510006, China
Zhenni Li: School of Automation, Guangdong University of Technology, Guangzhou 510006, China
Shengli Xie: Key Laboratory of Intelligent Detection and The Internet of Things in Manufacturing, Guangzhou 510006, China

Mathematics, 2023, vol. 11, issue 2, 1-19

Abstract: In recent years, model compression techniques have become highly effective for reducing the size of deep neural networks. However, many existing methods rely heavily on human expertise to explore the compression trade-off among network structure, speed, and accuracy, which is usually suboptimal and time-consuming. In this paper, we propose a framework that compresses models automatically through actor–critic structured deep reinforcement learning (DRL) interacting with each layer of the neural network: the actor network determines the compression strategy, and the critic network guides the actor's decisions through its predicted values, thereby improving the compression quality of the network. To enhance the prediction performance of the critic network, we impose an L1-norm regularizer on its weights, yielding distinct activation features in its representation and thus more accurate value predictions. Likewise, to improve the decision performance of the actor network, we impose an L1-norm regularizer on its weights, which removes redundant weights and sharpens its decisions. Furthermore, to improve training efficiency, we optimize the weights of both the actor and the critic with the proximal gradient method, which yields effective weight solutions and thus improves compression performance. In experiments on the MNIST dataset, the proposed method loses only 0.2% accuracy while pruning more than 70% of the neurons; on the CIFAR-10 dataset, it prunes more than 60% of the neurons with only a 7.1% accuracy loss, outperforming other existing methods. In terms of efficiency, the proposed method also requires the least time among the compared methods.
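The optimization step described above, a proximal gradient update that pairs a gradient step on the smooth part of the loss with the L1 proximal operator (soft thresholding), can be sketched in a few lines. The following is a minimal illustration under assumed names, hyperparameters, and a toy quadratic loss standing in for a network objective; it is not the paper's implementation, whose actor and critic losses and regularization weights are specified in the article itself.

```python
import numpy as np

def soft_threshold(w, tau):
    # Proximal operator of tau * ||w||_1 (soft thresholding):
    # shrinks each weight toward zero and zeroes out small entries.
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def proximal_gradient_step(w, grad, lr, lam):
    # One update for min_w f(w) + lam * ||w||_1:
    # gradient step on the smooth loss f, then the L1 prox.
    return soft_threshold(w - lr * grad, lr * lam)

# Toy demonstration on a quadratic loss f(w) = 0.5 * ||X @ w - y||^2,
# an illustrative stand-in for a critic's smooth training objective.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:5] = rng.normal(size=5)    # sparse ground-truth weights
y = X @ w_true

w = rng.normal(size=20)
for _ in range(500):
    grad = X.T @ (X @ w - y)       # gradient of the smooth part only
    w = proximal_gradient_step(w, grad, lr=1e-3, lam=0.1)

print("nonzero weights after training:", np.count_nonzero(w))
```

Because soft thresholding sets small weights exactly to zero, applying this update rule to the actor and critic weights prunes redundant connections during training rather than after it, which is the mechanism the abstract credits for both the improved decision accuracy and the training efficiency.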

Keywords: automatic compression; proximal gradient; network compression; structured pruning
JEL-codes: C
Date: 2023

Downloads: (external link)
https://www.mdpi.com/2227-7390/11/2/338/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/2/338/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:2:p:338-:d:1029391


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI

Handle: RePEc:gam:jmathe:v:11:y:2023:i:2:p:338-:d:1029391