EconPapers

Training Multilayer Neural Network Based on Optimal Control Theory for Limited Computational Resources

Ali Najem Alkawaz, Jeevan Kanesan, Anis Salwa Mohd Khairuddin, Irfan Anjum Badruddin, Sarfaraz Kamangar, Mohamed Hussien, Maughal Ahmed Ali Baig and N. Ameer Ahammad
Additional contact information
Ali Najem Alkawaz: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
Jeevan Kanesan: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
Anis Salwa Mohd Khairuddin: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
Irfan Anjum Badruddin: Mechanical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia
Sarfaraz Kamangar: Mechanical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia
Mohamed Hussien: Department of Chemistry, Faculty of Science, King Khalid University, P.O. Box 9004, Abha 61413, Saudi Arabia
Maughal Ahmed Ali Baig: Department of Mechanical Engineering, CMR Technical Campus, Kandlakoya, Medchal Road, Hyderabad 501401, India
N. Ameer Ahammad: Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk 71491, Saudi Arabia

Mathematics, 2023, vol. 11, issue 3, 1-15

Abstract: Backpropagation (BP)-based gradient descent is the standard approach to training a multilayer perceptron neural network. However, BP is inherently slow to learn and sometimes becomes trapped in local minima, mainly because of its constant learning rate. This pre-fixed learning rate regularly steers the BP network towards an unsuccessful stochastic steepest descent. To overcome this limitation of BP, this work proposes an improved method of training the neural network based on optimal control (OC) theory. The state equations in the optimal control formulation represent the BP neural network’s weights and biases, while the learning rate is treated as the control input, adapted throughout the training process. The effectiveness of the proposed algorithm is evaluated on several logic gate models, such as XOR, AND, and OR, as well as on a full adder model. Simulation results demonstrate that the proposed algorithm outperforms the conventional method, yielding higher output accuracy with a shorter training time. Training via OC also reduces the likelihood of becoming trapped in local minima. The proposed algorithm is almost 40% faster than the steepest descent method, with a marginally improved accuracy of approximately 60%. Consequently, the proposed algorithm is well suited to devices with limited computational resources: it is less complex and therefore lowers the circuit’s power consumption.
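The abstract's central idea — treating the network's weights and biases as the state of a dynamical system and the learning rate as a control input chosen during training, rather than fixed in advance — can be sketched in miniature. The following is an illustrative reconstruction, not the authors' Pontryagin-based implementation: the network size, the candidate learning rates, and the greedy per-step rate selection are all assumptions standing in for the paper's optimal-control solver.

```python
import numpy as np

# Illustrative sketch (not the paper's method): a small 2-4-1 MLP for XOR.
# The "control" idea is approximated by picking, at every step, the candidate
# learning rate that most reduces the loss, instead of using one fixed rate.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def init_params():
    return {"W1": rng.normal(0, 1, (2, 4)), "b1": np.zeros(4),
            "W2": rng.normal(0, 1, (4, 1)), "b2": np.zeros(1)}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(p, X):
    h = sigmoid(X @ p["W1"] + p["b1"])    # hidden-layer activations
    out = sigmoid(h @ p["W2"] + p["b2"])  # network output
    return h, out

def loss(p):
    _, out = forward(p, X)
    return float(np.mean((out - y) ** 2))  # mean squared error

def grads(p):
    h, out = forward(p, X)
    d_out = 2 * (out - y) * out * (1 - out) / len(X)  # dL/dz at output layer
    d_h = (d_out @ p["W2"].T) * h * (1 - h)           # backprop to hidden layer
    return {"W1": X.T @ d_h, "b1": d_h.sum(0),
            "W2": h.T @ d_out, "b2": d_out.sum(0)}

def step(p, lr):
    g = grads(p)
    return {k: p[k] - lr * g[k] for k in p}

def train(adaptive, epochs=2000):
    p = init_params()
    for _ in range(epochs):
        if adaptive:
            # control step: choose the rate that yields the lowest next loss
            lr = min((0.1, 0.5, 1.0, 5.0), key=lambda a: loss(step(p, a)))
        else:
            lr = 0.5  # fixed learning rate, as in plain steepest descent
        p = step(p, lr)
    return p

params = train(adaptive=True)
_, pred = forward(params, X)
print(loss(params), np.round(pred.ravel()).astype(int))
```

The per-step line search here is only a crude stand-in for the paper's optimal-control formulation, in which the learning-rate trajectory is derived from the Pontryagin minimum principle; it serves to show why an adapted rate can escape the slow, fixed-rate descent the abstract criticizes.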

Keywords: multilayer neural network; optimal control; Pontryagin minimum principle; backpropagation; logic gates
JEL-codes: C
Date: 2023
Citations: View citations in EconPapers (2)

Downloads: (external link)
https://www.mdpi.com/2227-7390/11/3/778/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/3/778/ (text/html)


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:3:p:778-:d:1056869


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jmathe:v:11:y:2023:i:3:p:778-:d:1056869