Finding the Optimal Topology of an Approximating Neural Network
Kostadin Yotov,
Emil Hadzhikolev,
Stanka Hadzhikoleva and
Stoyan Cheresharov
Additional contact information
All authors: Faculty of Mathematics and Informatics, University of Plovdiv Paisii Hilendarski, 236 Bulgaria Blvd., 4027 Plovdiv, Bulgaria
Mathematics, 2023, vol. 11, issue 1, 1-18
Abstract:
Researchers spend considerable time searching for the most efficient neural network for a given problem. Each candidate network must be configured, trained, tested, and compared against the expected performance. The configuration parameters, such as the training method, transfer functions, number of hidden layers, number of neurons, number of epochs, and tolerable error, each have many possible values. Guidelines for choosing appropriate parameter values would shorten the time needed to build an efficient neural network, assist researchers, and improve the performance of automated neural network search methods. This paper addresses the determination of upper bounds on the number of hidden layers, and on the number of neurons in them, for approximating artificial neural networks trained with algorithms that use the Jacobian matrix of the error function. The derived formulas for these upper bounds are proved theoretically, and the presented experiments confirm their validity. They show that the search for an efficient neural network can be confined below certain upper bounds, above which it becomes pointless. The formulas give researchers a useful auxiliary tool in the search for efficient neural networks with optimal topology. They apply to neural networks trained with methods such as Levenberg–Marquardt, Gauss–Newton, Bayesian regularization, scaled conjugate gradient, and BFGS quasi-Newton, which use the Jacobian matrix.
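To illustrate how such bounds could be used in practice, the following is a minimal Python sketch of a topology search confined below upper bounds on depth and width. Everything in it is an assumption, not the authors' method: the two bound functions max_hidden_layers() and max_neurons_per_layer() are hypothetical placeholders for the formulas derived in the paper, the sine-approximation data is an invented toy task, and scikit-learn's L-BFGS solver is only a quasi-Newton stand-in for the Jacobian-based trainers named in the abstract.

```python
# A minimal sketch, not the authors' implementation: grid search over
# hidden-layer topologies restricted by upper bounds on depth and width.
# The two bound functions below are hypothetical placeholders for the
# formulas derived in the paper, and scikit-learn's L-BFGS solver is a
# quasi-Newton stand-in for Jacobian-based trainers such as
# Levenberg-Marquardt.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def max_hidden_layers(n_samples: int) -> int:
    # Hypothetical bound; substitute the paper's formula here.
    return 3

def max_neurons_per_layer(n_samples: int, n_features: int) -> int:
    # Hypothetical bound; substitute the paper's formula here.
    return 16

# Toy approximation task: learn y = sin(x) on [0, 2*pi].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(500, 1))
y = np.sin(X).ravel()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

depth_cap = max_hidden_layers(len(X_tr))
width_cap = max_neurons_per_layer(len(X_tr), X_tr.shape[1])

# Search only below the caps; per the paper, searching above them
# is pointless.
best = None
for depth in range(1, depth_cap + 1):
    for width in range(1, width_cap + 1):
        net = MLPRegressor(hidden_layer_sizes=(width,) * depth,
                           solver="lbfgs", max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)
        score = net.score(X_te, y_te)  # R^2 on held-out data
        if best is None or score > best[0]:
            best = (score, depth, width)

print("best R^2 = %.4f with %d hidden layer(s) of %d neuron(s)" % best)
```

The point of the sketch is only the structure of the search: the bound functions cap the grid, so the number of trained candidates stays small and bounded rather than open-ended.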
Keywords: neural network topology; neural network architecture; number of layers in ANN; number of neurons (search for similar items in EconPapers)
JEL-codes: C (search for similar items in EconPapers)
Date: 2023
Downloads: (external link)
https://www.mdpi.com/2227-7390/11/1/217/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/1/217/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:1:p:217-:d:1022114