Differential Elite Learning Particle Swarm Optimization for Global Numerical Optimization
Qiang Yang,
Xu Guo,
Xu-Dong Gao,
Dong-Dong Xu and
Zhen-Yu Lu
Additional contact information
All authors: School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Mathematics, 2022, vol. 10, issue 8, 1-32
Abstract:
Although particle swarm optimization (PSO) has been successfully applied to many optimization problems, its performance still suffers on complicated problems, especially those with many interacting variables and many wide, flat local basins. To alleviate this issue, this paper proposes a differential elite learning particle swarm optimization (DELPSO) that differentiates the two guiding exemplars of each particle as much as possible. Specifically, particles in the current swarm are divided into two groups, an elite group and a non-elite group, based on their fitness. Particles in the non-elite group are updated by learning from those in the elite group, while particles in the elite group are not updated and enter the next generation directly. To balance fast convergence and high diversity at the particle level, each particle in the non-elite group learns from two distinct elites in the elite group. In this way, both the learning effectiveness and the learning diversity of particles are expected to improve considerably. To reduce the sensitivity of DELPSO to its newly introduced parameters, dynamic parameter-adjustment strategies were further designed. With these two main components, DELPSO is expected to balance search intensification and diversification well, exploring and exploiting the solution space properly to obtain promising performance. Extensive experiments on the widely used CEC 2017 benchmark set with three different dimension sizes demonstrate that DELPSO achieves highly competitive, and often much better, performance than state-of-the-art PSO variants.
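The elite/non-elite mechanism described in the abstract can be sketched in a few lines. The following is a minimal illustrative reading, not the paper's actual algorithm: the swarm is sorted by fitness, the best fraction is frozen as elites, and each non-elite particle's velocity is pulled toward two distinct randomly chosen elites. The elite ratio and the coefficients `w`, `c1`, `c2` are assumed placeholder values, not those used by DELPSO, and the `sphere` objective is just a toy test function.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy minimization objective: the sphere function."""
    return np.sum(x ** 2)

def delpso_style_step(positions, velocities, fitness,
                      elite_ratio=0.4, w=0.5, c1=1.5, c2=1.5):
    """One hypothetical DELPSO-style update (a sketch, not the paper's rule).

    Particles are ranked by fitness; the best `elite_ratio` fraction form the
    elite group and pass to the next generation unchanged. Each non-elite
    particle learns from two *distinct* randomly chosen elites, which is one
    reading of the 'two differential elites' idea.
    """
    n, d = positions.shape
    order = np.argsort(fitness)              # ascending: best particles first
    n_elite = max(2, int(elite_ratio * n))   # need at least 2 elites to differ
    elites = order[:n_elite]

    for i in order[n_elite:]:                # update only non-elite particles
        e1, e2 = rng.choice(elites, size=2, replace=False)  # two distinct elites
        r1, r2 = rng.random(d), rng.random(d)
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (positions[e1] - positions[i])
                         + c2 * r2 * (positions[e2] - positions[i]))
        positions[i] = positions[i] + velocities[i]
    return positions, velocities
```

Because elite particles are carried over unchanged, the best solution found so far can never degrade between generations, which is the convergence-preserving side of the trade-off the abstract describes.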
Keywords: particle swarm optimization; differential elite learning; swarm intelligence; global optimization; multimodal problems
JEL-codes: C
Date: 2022
Citations: 3
Downloads: (external link)
https://www.mdpi.com/2227-7390/10/8/1261/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/8/1261/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:8:p:1261-:d:791349
Mathematics is currently edited by Ms. Emma He