NSGA-II/SDR-OLS: A Novel Large-Scale Many-Objective Optimization Method Using Opposition-Based Learning and Local Search
Yingxin Zhang,
Gaige Wang and
Hongmei Wang
Additional contact information
Yingxin Zhang: School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China
Gaige Wang: School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China
Hongmei Wang: Information Engineering College, Xinjiang Institute of Engineering, Urumqi 830023, China
Mathematics, 2023, vol. 11, issue 8, 1-32
Abstract:
Recently, many-objective optimization problems (MaOPs) have become a topic of great interest in academia and industry, and an increasing number of many-objective evolutionary algorithms (MaOEAs) have been proposed. NSGA-II/SDR (NSGA-II with a strengthened dominance relation) is an improved NSGA-II obtained by replacing the traditional Pareto dominance relation with a new dominance relation, termed SDR; it outperforms the original algorithm on small-scale MaOPs with few decision variables but performs poorly on large-scale MaOPs. To address this problem, we add the following improvements to NSGA-II/SDR to obtain NSGA-II/SDR-OLS, which achieves a better balance between population convergence and diversity when solving large-scale MaOPs: (1) an opposition-based learning (OBL) strategy is introduced in the population initialization stage, and the final initial population is formed from the randomly generated population and its opposition-based counterpart, which improves the quality and convergence of the population; (2) a local search (LS) strategy is introduced to expand population diversity by finding neighborhood solutions, so that solutions do not fall into local optima too early. NSGA-II/SDR-OLS is compared with the original algorithm on nine benchmark problems to verify the effectiveness of these improvements. We then compare our algorithm with six existing algorithms: a promising-region-based multi-objective evolutionary algorithm (PREA), a scalable small-subpopulation-based covariance matrix adaptation evolution strategy (S3-CMA-ES), a decomposition-based multi-objective evolutionary algorithm guided by growing neural gas (DEA-GNG), a reference-vector-guided evolutionary algorithm (RVEA), NSGA-II with a conflict-based partitioning strategy (NSGA-II-conflict), and a genetic algorithm using reference-point-based non-dominated sorting (NSGA-III). The proposed algorithm achieves the best results in the vast majority of test cases, indicating that it is highly competitive.
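The two strategies described above, OBL initialization and neighborhood-based local search, can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration only, assuming box-constrained decision variables with bounds lower/upper and a scalar fitness function that stands in for the SDR-based non-dominated ranking actually used in the paper; it is not the authors' implementation.

import numpy as np

def obl_initialization(pop_size, dim, lower, upper, fitness, seed=None):
    # Opposition-based learning (OBL) initialization sketch.
    # Build a random population and its opposition-based counterpart
    # (x_opp = lower + upper - x), then keep the better half of the union.
    # A scalar fitness stands in here for the SDR-based ranking in the paper.
    rng = np.random.default_rng(seed)
    pop = lower + rng.random((pop_size, dim)) * (upper - lower)
    opposite = lower + upper - pop                      # opposition-based population
    union = np.vstack([pop, opposite])
    scores = np.array([fitness(x) for x in union])
    keep = np.argsort(scores)[:pop_size]                # fitter half (minimization)
    return union[keep]

def local_search(x, lower, upper, fitness, step=0.05, trials=10, seed=None):
    # Local search (LS) sketch: sample neighborhood solutions by small
    # Gaussian perturbations and accept the best improving neighbor, if any.
    rng = np.random.default_rng(seed)
    best_x, best_f = x, fitness(x)
    for _ in range(trials):
        neighbor = np.clip(x + rng.normal(0.0, step * (upper - lower)), lower, upper)
        f = fitness(neighbor)
        if f < best_f:                                  # minimization assumed
            best_x, best_f = neighbor, f
    return best_x

# Hypothetical usage on the sphere function with 100 decision variables.
if __name__ == "__main__":
    dim = 100
    lower, upper = np.zeros(dim), np.ones(dim)
    sphere = lambda x: float(np.sum(x ** 2))
    pop = obl_initialization(50, dim, lower, upper, sphere, seed=0)
    refined = np.array([local_search(x, lower, upper, sphere, seed=0) for x in pop])

In the algorithm proposed in the paper, the union of the random and opposition-based populations would be filtered by SDR-based non-dominated sorting rather than a single scalar score, and local search would operate within the many-objective environmental selection.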
Keywords: evolutionary algorithm; many-objective optimization; large-scale optimization; opposition-based learning; local search
JEL-codes: C
Date: 2023
Downloads:
https://www.mdpi.com/2227-7390/11/8/1911/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/8/1911/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:8:p:1911-:d:1126279