A Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer for Large-Scale Optimization
Qiang Yang,
Kai-Xuan Zhang,
Xu-Dong Gao,
Dong-Dong Xu,
Zhen-Yu Lu,
Sang-Woon Jeon and
Jun Zhang
Additional contact information
Qiang Yang: School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Kai-Xuan Zhang: School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Xu-Dong Gao: School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Dong-Dong Xu: School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Zhen-Yu Lu: School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Sang-Woon Jeon: Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea
Jun Zhang: Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea
Mathematics, 2022, vol. 10, issue 7, 1-32
Abstract:
High-dimensional optimization problems are increasingly common in the era of big data and the Internet of Things (IoT), and they seriously challenge the performance of existing optimizers. To solve such problems effectively, this paper devises a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO), which integrates the valuable evolutionary information of different elite particles in the swarm to guide the updating of inferior ones. Specifically, the swarm is first separated into two exclusive sets: the elite set (ES), containing the top-ranked individuals, and the non-elite set (NES), consisting of the remaining individuals. Then, the dimensions of each particle in NES are randomly divided into several groups of equal size. Subsequently, each dimension group of each non-elite particle is guided by two different elites randomly selected from ES. In this way, each non-elite particle in NES is comprehensively guided by multiple elite particles in ES, so high diversity can be maintained while fast convergence remains likely. To alleviate the sensitivity of DGCELSO to its associated parameters, we further devise dynamic adjustment strategies that change the parameter settings during evolution. With these mechanisms, DGCELSO is expected to explore and exploit the solution space properly and thus locate optimal solutions for optimization problems. Extensive experiments on two commonly used large-scale benchmark suites demonstrate that DGCELSO achieves highly competitive, and often much better, performance than several state-of-the-art large-scale optimizers.
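The update scheme sketched in the abstract can be illustrated in code. The sketch below is an assumption-laden reconstruction, not the paper's exact formulation: the function name `dgcelso_step`, the elite ratio, the group count, the control coefficient `phi`, and the LLSO-style velocity rule are all hypothetical choices used only to show the ES/NES split, the random dimension grouping, and the per-group guidance by two randomly selected elites.

```python
import numpy as np

def dgcelso_step(swarm, vel, fitness, es_ratio=0.2, n_groups=4, phi=0.4, rng=None):
    """One hypothetical DGCELSO-style iteration (minimization).

    Elites are left untouched; each non-elite particle has its dimensions
    randomly split into equal-size groups, and each group learns from two
    elites drawn at random from the elite set. The velocity rule here is an
    assumed LLSO-style combination, not the paper's exact formula.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = swarm.shape
    order = np.argsort(fitness)                  # best fitness first
    n_es = max(2, int(es_ratio * n))             # size of the elite set ES
    es_idx, nes_idx = order[:n_es], order[n_es:]
    new_swarm, new_vel = swarm.copy(), vel.copy()
    # Randomly partition the d dimensions into n_groups (near-)equal groups.
    groups = np.array_split(rng.permutation(d), n_groups)
    for i in nes_idx:
        for g in groups:
            a, b = rng.choice(es_idx, size=2, replace=False)
            # Let the fitter of the two sampled elites act as the primary exemplar.
            e1, e2 = (a, b) if fitness[a] <= fitness[b] else (b, a)
            r1, r2, r3 = rng.random(3)
            new_vel[i, g] = (r1 * vel[i, g]
                             + r2 * (swarm[e1, g] - swarm[i, g])
                             + phi * r3 * (swarm[e2, g] - swarm[i, g]))
            new_swarm[i, g] = swarm[i, g] + new_vel[i, g]
    return new_swarm, new_vel
```

Because every dimension group of a non-elite particle draws its own pair of elites, each inferior particle is guided by multiple exemplars per iteration, which is the mechanism the abstract credits for balancing diversity and convergence.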
Keywords: large-scale optimization; particle swarm optimization; dimension group-based comprehensive elite learning; high-dimensional problems; elite learning
JEL-codes: C
Date: 2022
Downloads:
https://www.mdpi.com/2227-7390/10/7/1072/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/7/1072/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:7:p:1072-:d:780419
Mathematics is currently edited by Ms. Emma He