OPT-RNN-DBSVM: OPTimal Recurrent Neural Network and Density-Based Support Vector Machine
Karim El Moutaouakil,
Abdellatif El Ouissari,
Adrian Olaru,
Vasile Palade and
Mihaela Ciorei
Additional contact information
Karim El Moutaouakil: Engineering Science Laboratory, Taza Multidisciplinary Faculty, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco
Abdellatif El Ouissari: Engineering Science Laboratory, Taza Multidisciplinary Faculty, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco
Adrian Olaru: Department of Robotics and Production System, University Politehnica of Bucharest, 060042 Bucharest, Romania
Vasile Palade: Centre for Computational Science and Mathematical Modelling, Coventry University, Priory Road, Coventry CV1 5FB, UK
Mihaela Ciorei: Department of Robotics and Production System, University Politehnica of Bucharest, 060042 Bucharest, Romania
Mathematics, 2023, vol. 11, issue 16, 1-28
Abstract:
When implementing SVMs, two major problems are encountered: (a) the number of local minima of dual-SVM increases exponentially with the number of samples and (b) the computer storage memory required for a regular quadratic programming solver increases exponentially as the problem size expands. The Kernel-Adatron family of algorithms, gaining attention recently, has allowed us to handle very large classification and regression problems. However, these methods treat different types of samples (i.e., noise, border, and core) in the same manner, which makes these algorithms search in unpromising areas and increases the number of iterations as well. This paper introduces a hybrid method to overcome such shortcomings, called the Optimal Recurrent Neural Network and Density-Based Support Vector Machine (Opt-RNN-DBSVM). This method consists of four steps: (a) the characterization of different samples, (b) the elimination of samples with a low probability of being a support vector, (c) the construction of an appropriate recurrent neural network to solve the dual-DBSVM based on an original energy function, and (d) finding the solution to the system of differential equations that govern the dynamics of the RNN, using the Euler–Cauchy method involving an optimal time step. Density-based preprocessing reduces the number of local minima in the dual-SVM. The RNN’s recurring architecture avoids the need to explore recently visited areas. With the optimal time step, the search moves from the current vectors to the best neighboring support vectors. It is demonstrated that RNN-SVM converges to feasible support vectors and Opt-RNN-DBSVM has very low time complexity compared to the RNN-SVM with a constant time step and the Kernel-Adatron algorithm–SVM. Several classification performance measures are used to compare Opt-RNN-DBSVM with different classification methods and the results obtained show the good performance of the proposed method.
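Illustrative sketch (not the authors' code): the abstract describes a four-step pipeline, and the short Python sketch below gives a rough, hedged picture of steps (b)-(d): samples are filtered by a simple k-nearest-neighbour density criterion as a crude stand-in for the density-based characterisation, and the box-constrained dual SVM is then solved by projected-gradient (Hopfield-style) dynamics integrated with an Euler step whose size comes from an exact line search, used here as a simple proxy for the paper's optimal time step. The RBF kernel, the kNN density proxy, the omission of the equality constraint (as in Kernel-Adatron-style solvers), and all function names and parameters (rbf_kernel, density_filter, opt_rnn_svm, gamma, C, k) are assumptions introduced only for illustration.

import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Gram matrix of the Gaussian (RBF) kernel -- an illustrative kernel choice.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def density_filter(X, y, k=5, low_q=0.05, high_q=0.95):
    # Crude stand-in for the density-based characterisation: rank samples by the
    # mean distance to their k nearest neighbours and keep only the middle band,
    # dropping the densest (deep-core) and sparsest (noise-like) points, which
    # have a low probability of becoming support vectors.
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    lo, hi = np.quantile(knn, [low_q, high_q])
    keep = (knn >= lo) & (knn <= hi)
    return X[keep], y[keep]

def opt_rnn_svm(X, y, C=10.0, gamma=0.5, n_steps=500, tol=1e-6):
    # Projected-gradient ("recurrent network") ascent on the box-constrained dual
    #   W(a) = 1'a - 0.5 a'Qa,  Q = (yy') * K,  0 <= a <= C,
    # with the equality constraint sum(a*y) = 0 omitted, as in Kernel-Adatron-style
    # solvers. Each Euler step uses the exact line-search step size for the quadratic.
    K = rbf_kernel(X, gamma)
    Q = np.outer(y, y) * K
    a = np.zeros(len(y))
    for _ in range(n_steps):
        g = 1.0 - Q @ a                                    # gradient of the dual objective
        curv = g @ Q @ g
        eta = (g @ g) / curv if curv > 1e-12 else 1e-3     # line-search step along g
        a_new = np.clip(a + eta * g, 0.0, C)               # Euler step + projection onto the box
        if np.max(np.abs(a_new - a)) < tol:                # dynamics have (numerically) converged
            a = a_new
            break
        a = a_new
    sv = (a > 1e-8) & (a < C - 1e-8)                       # margin support vectors
    b = np.mean(y[sv] - y[sv] * (Q[sv] @ a)) if sv.any() else 0.0
    return a, b

A typical call under these assumptions would be X_f, y_f = density_filter(X, y) followed by alpha, b = opt_rnn_svm(X_f, y_f); a new point x is then classified by the sign of sum_i alpha_i * y_i * K(x_i, x) + b.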
Keywords: Recurrent Neural Network (RNN); Density-Based Algorithm; Support Vector Machine (SVM); Kernel-Adatron algorithm (KA); Euler–Cauchy algorithm
JEL-codes: C
Date: 2023
Citations: View citations in EconPapers (1)
Downloads: (external link)
https://www.mdpi.com/2227-7390/11/16/3555/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/16/3555/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:16:p:3555-:d:1219125
Mathematics is currently edited by Ms. Emma He
More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.