Fractional Tikhonov regularization to improve the performance of extreme learning machines
Shraddha M. Naik,
Ravi Prasad K. Jagannath and
Venkatanareshbabu Kuppili
Physica A: Statistical Mechanics and its Applications, 2020, vol. 551, issue C
Abstract:
Extreme learning machine (ELM) is a single-hidden-layer feed-forward neural network in which the input weights linking the input layer to the hidden layer are chosen randomly, while the output weights linking the hidden layer to the output layer are determined analytically by solving a linear system of equations, making it one of the fastest learning algorithms. The Moore–Penrose (MP) generalized inverse is normally employed to obtain the output weights of the network. Since the random weight parameters between the input and hidden layers need not be tuned, ELM provides good generalization performance with fast learning speed. In general, data sets from real-world problems tend to make the linear system of ELM ill-conditioned due to inconsistent noise levels in the input data, which leads to unreliable solutions and over-fitting. Regularization techniques have been developed to address such issues in ELM; they involve the estimation of an additional variable termed the regularization parameter. In this context, the proper selection of the regularization parameter is a crucial task, as it decides the quality of the solution obtained from the linear system. The popular choice is Tikhonov regularization, which penalizes the ℓ2-norm of the model parameters. In ELM, however, this gives equal weight to the singular values of the matrix irrespective of the noise level present in the data. In the presented work, a fractional framework is introduced into the Tikhonov-regularized ELM to weigh the singular values with respect to a fractional parameter and thereby reduce the effect of the different noise levels. Moreover, an automated golden-section method is applied to choose the optimal fractional parameter, and the generalized cross-validation method is applied to obtain a suitable value of the regularization parameter. The proposed strategy of applying fractional Tikhonov regularization to ELM improves performance over the conventional methods with respect to the chosen performance measures. Finally, the results obtained with the proposed fractional regularization are also shown to be statistically significant.
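To give a concrete feel for the procedure described in the abstract, the following is a minimal Python sketch, not the authors' implementation: it assumes a sigmoid hidden layer, a common fractional Tikhonov filter of the form s^(α+1)/(s^(α+1)+λ) applied to the singular values, generalized cross-validation over a fixed log-spaced grid for λ, and a plain golden-section search for the fractional parameter α. The paper's exact formulation, parameter ranges, and data sets may differ.

# Minimal sketch of an ELM with fractional Tikhonov regularization (illustrative only).
# Assumptions not taken from the paper: sigmoid hidden layer, filter s^(a+1)/(s^(a+1)+lam),
# GCV over a log-spaced lambda grid, golden-section search for the fractional parameter.
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random-feature hidden layer H = sigmoid(X W + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def frac_filter(s, lam, alpha):
    """Fractional Tikhonov filter factors for singular values s."""
    return s**(alpha + 1) / (s**(alpha + 1) + lam)

def gcv(U, s, T, lam, alpha):
    """Generalized cross-validation score for a given (lam, alpha)."""
    f = frac_filter(s, lam, alpha)
    UtT = U.T @ T
    resid = np.sum(((1 - f)[:, None] * UtT)**2) + np.sum((T - U @ UtT)**2)
    return resid / (U.shape[0] - f.sum())**2

def best_lambda(U, s, T, alpha, grid=np.logspace(-8, 2, 60)):
    """Pick the regularization parameter minimizing the GCV score."""
    return min(grid, key=lambda lam: gcv(U, s, T, lam, alpha))

def golden_section(fun, lo, hi, tol=1e-3):
    """Plain golden-section search for the fractional parameter."""
    gr = (np.sqrt(5) - 1) / 2
    c, d = hi - gr * (hi - lo), lo + gr * (hi - lo)
    while hi - lo > tol:
        if fun(c) < fun(d):
            hi, d = d, c
            c = hi - gr * (hi - lo)
        else:
            lo, c = c, d
            d = lo + gr * (hi - lo)
    return (lo + hi) / 2

# Toy usage: 2-class problem with one-hot targets.
X = rng.normal(size=(200, 10))
T = np.eye(2)[(X[:, 0] > 0).astype(int)]
W, b = rng.normal(size=(10, 50)), rng.normal(size=50)
H = elm_hidden(X, W, b)

U, s, Vt = np.linalg.svd(H, full_matrices=False)
alpha = golden_section(lambda a: gcv(U, s, T, best_lambda(U, s, T, a), a), 0.0, 2.0)
lam = best_lambda(U, s, T, alpha)
f = frac_filter(s, lam, alpha)
beta = Vt.T @ ((f / s)[:, None] * (U.T @ T))   # filtered output weights
pred = np.argmax(H @ beta, axis=1)
print("alpha=%.3f lambda=%.2e accuracy=%.3f" % (alpha, lam, (pred == T.argmax(1)).mean()))

The sketch factors H once by SVD and reuses the factors, so both the λ grid search and the golden-section search over α only re-weight singular values rather than re-solving the linear system.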
Keywords: Classification; Extreme learning machine; Regularization; Tikhonov regularization
Date: 2020
Downloads: http://www.sciencedirect.com/science/article/pii/S0378437119322319 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:phsmap:v:551:y:2020:i:c:s0378437119322319
DOI: 10.1016/j.physa.2019.124034