Accelerating Extreme Search of Multidimensional Functions Based on Natural Gradient Descent with Dirichlet Distributions
Ruslan Abdulkadirov,
Pavel Lyakhov and
Nikolay Nagornov
Additional contact information
Ruslan Abdulkadirov: North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355009 Stavropol, Russia
Pavel Lyakhov: North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355009 Stavropol, Russia
Nikolay Nagornov: Department of Mathematical Modeling, North-Caucasus Federal University, 355009 Stavropol, Russia
Mathematics, 2022, vol. 10, issue 19, 1-13
Abstract:
Attaining high accuracy with less complex neural network architectures remains one of the most important problems in machine learning. In many studies, recognition and prediction quality is improved by extending neural networks with standard or special-purpose neurons, which significantly increases training time. However, employing an optimization algorithm that drives the loss function into the neighborhood of the global minimum can reduce the number of layers and epochs. In this work, we explore the extremum search of multidimensional functions with a proposed natural gradient descent based on the Dirichlet and generalized Dirichlet distributions. The natural gradient describes a multidimensional surface with probability distributions, which allows us to reduce the variation in gradient accuracy and step size. The proposed algorithm is equipped with step-size adaptation, which lets it reach higher accuracy within a small number of iterations of the minimization process, compared with ordinary gradient descent and adaptive moment estimation (Adam). We provide experiments on test functions in three- and four-dimensional spaces, where the natural gradient descent proves its ability to converge to the neighborhood of the global minimum. Such an approach can find application in minimizing the loss function in various types of neural networks, such as convolutional, recurrent, spiking, and quantum networks.
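To illustrate the general idea behind the abstract, the sketch below implements natural gradient descent preconditioned by the Fisher information matrix of a Dirichlet distribution, for which the closed form F = diag(psi'(alpha_i)) - psi'(alpha_0) * 1 1^T is standard. This is a minimal illustration only: the backtracking step-size heuristic and the quadratic test function are assumptions standing in for the paper's exact step-size adaptation rule and test suite, which are not reproduced here.

```python
import numpy as np
from scipy.special import polygamma


def dirichlet_fisher(alpha):
    """Fisher information matrix of a Dirichlet(alpha) distribution:
    F = diag(psi'(alpha_i)) - psi'(alpha_0) * 1 1^T,  alpha_0 = sum(alpha),
    where psi' is the trigamma function; F is positive definite for alpha > 0.
    """
    return np.diag(polygamma(1, alpha)) - polygamma(1, alpha.sum())


def natural_gradient_descent(f, grad_f, alpha0, lr=1.0, n_iter=200):
    """Minimize f over the positive orthant with a Dirichlet natural gradient.

    The Euclidean gradient is preconditioned by the inverse Fisher matrix,
    and the step is halved until f decreases -- a crude stand-in for the
    paper's step-size adaptation, whose exact rule is not reproduced here.
    """
    alpha = np.asarray(alpha0, dtype=float)
    for _ in range(n_iter):
        # Natural gradient direction: solve F(alpha) d = grad f(alpha).
        direction = np.linalg.solve(dirichlet_fisher(alpha), grad_f(alpha))
        step = lr
        while step > 1e-12:
            # Clip to keep the Dirichlet parameters strictly positive.
            candidate = np.maximum(alpha - step * direction, 1e-8)
            if f(candidate) < f(alpha):
                alpha = candidate
                break
            step /= 2.0  # backtrack until the objective decreases
    return alpha


# Hypothetical test problem: a convex quadratic with minimum at (1, 2, 3).
target = np.array([1.0, 2.0, 3.0])
f = lambda a: float(np.sum((a - target) ** 2))
grad_f = lambda a: 2.0 * (a - target)
print(natural_gradient_descent(f, grad_f, alpha0=[0.5, 0.5, 0.5]))
```

Because the Fisher matrix is positive definite for positive parameters, the preconditioned direction is always a descent direction, so the backtracking loop guarantees a monotone decrease of the objective.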
Keywords: natural gradient descent; optimization; K–L divergence; Dirichlet distribution; generalized Dirichlet distribution
JEL-codes: C
Date: 2022
Citations: 2
Downloads:
https://www.mdpi.com/2227-7390/10/19/3556/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/19/3556/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:19:p:3556-:d:929180
Mathematics is currently edited by Ms. Emma He