
An Improved Reacceleration Optimization Algorithm Based on the Momentum Method for Image Recognition

Haijing Sun, Ying Cai, Ran Tao, Yichuan Shao (), Lei Xing, Can Zhang and Qian Zhao
Additional contact information
Haijing Sun: School of Intelligent Science and Engineering, Shenyang University, Shenyang 110044, China
Ying Cai: School of Information Engineering, Shenyang University, Shenyang 110044, China
Ran Tao: Shanghai Maruka Computer Information Technology Co., Ltd., Shanghai 200052, China
Yichuan Shao: School of Intelligent Science and Engineering, Shenyang University, Shenyang 110044, China
Lei Xing: School of Chemistry and Chemical Engineering, University of Surrey, Surrey GU2 7XH, UK
Can Zhang: School of Information Engineering, Shenyang University, Shenyang 110044, China
Qian Zhao: School of Science, Shenyang University of Technology, Shenyang 110044, China

Mathematics, 2024, vol. 12, issue 11, 1-15

Abstract: The optimization algorithm plays a crucial role in image recognition by neural networks, yet it is challenging to accelerate a model's convergence while maintaining high precision. The momentum method, a commonly used stochastic gradient descent optimization algorithm, requires many epochs to find the optimal parameters during model training, and its gradient-descent velocity depends solely on the historical gradients and is not subject to random fluctuations. To address this issue, an optimization algorithm that enhances the gradient-descent velocity, the momentum reacceleration gradient descent (MRGD) algorithm, is proposed. The algorithm performs a pointwise (element-wise) division of the current momentum by the gradient and multiplies the resulting ratio with the gradient, so that the update rate and step size of the parameters can be adjusted according to the state of gradient descent, achieving faster convergence and higher precision when training deep learning models. The effectiveness of this mechanism is further demonstrated by applying the reacceleration scheme to the Adam optimizer, yielding the MRGDAdam algorithm. Both algorithms are verified on multiple image classification datasets, and the experimental results show that the proposed optimization algorithms enable models to achieve higher recognition accuracy within a small number of training epochs while also speeding up model training. This study provides new ideas and extensions for future optimizer research.
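The reacceleration idea sketched in the abstract (momentum accumulation, an element-wise momentum-to-gradient ratio, and that ratio multiplied with the gradient to rescale the step) can be illustrated with a short Python snippet. This is a minimal, purely illustrative sketch of one possible reading of that description; the function name mrgd_step, the hyperparameter values, and the exact form of the scaling factor are assumptions made for illustration and are not taken from the paper.

    # Illustrative sketch only; NOT the authors' exact MRGD update rule.
    import numpy as np

    def mrgd_step(params, grads, velocity, lr=0.01, beta=0.9, eps=1e-8):
        # Accumulate historical gradients into the momentum (velocity) term.
        velocity = beta * velocity + (1.0 - beta) * grads
        # Hypothetical reacceleration factor: element-wise ratio of the
        # current momentum to the gradient (eps guards against division by zero).
        reaccel = np.abs(velocity / (grads + eps))
        # Multiply the factor with the gradient to rescale the update step.
        params = params - lr * reaccel * grads
        return params, velocity

    # Toy usage: minimize f(x) = x^2 starting from x = 5.
    x = np.array([5.0])
    v = np.zeros_like(x)
    for _ in range(100):
        g = 2.0 * x            # gradient of x^2
        x, v = mrgd_step(x, g, v, lr=0.05)
    print(x)  # approaches 0

In this reading, the ratio is small while the momentum is still building and approaches one once the descent direction is consistent, so the effective step size grows as training stabilizes; the same scaling could in principle be inserted into an Adam-style update, which is how the abstract describes MRGDAdam.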

Keywords: momentum acceleration; optimization algorithm; deep learning; image recognition; gradient descent algorithm
JEL-codes: C
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2227-7390/12/11/1759/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/11/1759/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:11:p:1759-:d:1409364

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

 
Handle: RePEc:gam:jmathe:v:12:y:2024:i:11:p:1759-:d:1409364