EconPapers

Enhanced Parallel Sine Cosine Algorithm for Constrained and Unconstrained Optimization

Akram Belazi, Héctor Migallón, Daniel González-Sánchez, Jorge González-García, Antonio Jimeno-Morenilla and José-Luis Sánchez-Romero
Additional contact information
Akram Belazi: Laboratory RISC-ENIT (LR-16-ES07), Tunis El Manar University, Tunis 1002, Tunisia
Héctor Migallón: Department of Computer Engineering, Miguel Hernández University, 03202 Elche, Spain
Daniel González-Sánchez: Department of Computer Engineering, Miguel Hernández University, 03202 Elche, Spain
Jorge González-García: Department of Computer Engineering, Miguel Hernández University, 03202 Elche, Spain
Antonio Jimeno-Morenilla: Department of Computer Technology, University of Alicante, 03071 Alicante, Spain
José-Luis Sánchez-Romero: Department of Computer Technology, University of Alicante, 03071 Alicante, Spain

Mathematics, 2022, vol. 10, issue 7, 1-47

Abstract: The main idea of the sine cosine algorithm (SCA) is a sine- and cosine-driven oscillation of candidate solutions outwards from, or towards, the best solution. The first main contribution of this paper is an enhanced version of the SCA, called the ESCA algorithm. Experimental tests demonstrate the superiority of the proposed algorithm over a set of state-of-the-art algorithms in terms of solution accuracy and convergence speed. When such algorithms are transferred to the business sector, they must meet time requirements that depend on the industrial process; if these temporal requirements are not met, an efficient solution is to speed them up by designing parallel algorithms. The second major contribution of this work is the design of several parallel algorithms that efficiently exploit current multicore processor architectures. First, one-level synchronous and asynchronous parallel ESCA algorithms are designed. They offer two advantages: they retain the behavior of the proposed algorithm, and they deliver excellent parallel performance by combining coarse-grained with fine-grained parallelism. Moreover, the parallel scalability of the proposed algorithms is further improved by employing a two-level parallel strategy. The experimental results show that the one-level parallel ESCA algorithms reduce the computing time, on average, by 87.4% and 90.8%, respectively, using 12 physical processing cores. The two-level parallel algorithms provide additional reductions of the computing time, namely 91.4%, 93.1%, and 94.5% with 16, 20, and 24 processing cores, including physical and logical cores. The comparison analysis is carried out on 30 unconstrained benchmark functions and three challenging engineering design problems. The experimental outcomes show that the proposed ESCA algorithm behaves outstandingly well in terms of exploration and exploitation, local optima avoidance, and convergence speed toward the optimum. The overall performance of the proposed algorithm is statistically validated using three non-parametric statistical tests, namely the Friedman, Friedman aligned, and Quade tests.
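The sine/cosine update and the one-level OpenMP parallelization mentioned in the abstract can be made concrete with a short sketch. The C fragment below is a minimal illustration, not the authors' ESCA implementation: it applies the canonical SCA position update (each agent moves towards or away from the current best solution by a sine- or cosine-scaled step) and parallelizes the population loop with OpenMP in the coarse-grained, one-level style the abstract refers to. The population size, dimension, amplitude constant, and the simple per-thread random-number generator are illustrative assumptions; the ESCA enhancements, the asynchronous variant, and the two-level strategy are not shown.

/* Minimal sketch (not the authors' code): the canonical SCA position
 * update of Mirjalili (2016), with the population loop parallelized by
 * OpenMP in the one-level, coarse-grained style the abstract describes.
 * NP, DIM, A and the LCG random generator are illustrative assumptions. */
#include <math.h>
#include <omp.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NP  64    /* population size (assumed)          */
#define DIM 30    /* problem dimension (assumed)        */
#define A   2.0   /* amplitude constant controlling r1  */

/* Tiny per-thread pseudo-random generator returning a value in [lo, hi). */
static double frand(unsigned int *state, double lo, double hi)
{
    *state = *state * 1664525u + 1013904223u;              /* LCG step */
    return lo + (hi - lo) * ((double)(*state >> 8) / 16777216.0);
}

/* One SCA iteration: every agent i moves towards or away from the best
 * solution found so far, scaled by sin(r2) or cos(r2) and the shrinking
 * coefficient r1 = A - t*(A/max_iter). */
void sca_iteration(double pop[NP][DIM], const double best[DIM],
                   int t, int max_iter)
{
    double r1 = A - (double)t * (A / (double)max_iter);

    /* Coarse-grained parallelism: each thread updates its own block of
     * agents; the fine-grained inner loop over dimensions stays serial. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < NP; ++i) {
        unsigned int seed = 7919u * (unsigned int)(omp_get_thread_num() + 1)
                            + (unsigned int)(t * NP + i);
        for (int j = 0; j < DIM; ++j) {
            double r2 = frand(&seed, 0.0, 2.0 * M_PI);
            double r3 = frand(&seed, 0.0, 2.0);
            double r4 = frand(&seed, 0.0, 1.0);
            double step = fabs(r3 * best[j] - pop[i][j]);
            pop[i][j] += (r4 < 0.5) ? r1 * sin(r2) * step
                                    : r1 * cos(r2) * step;
        }
    }
}

A two-level variant in the spirit of the paper would additionally exploit parallelism inside each agent update (for example, over dimensions or objective-function evaluations) on top of the population-level loop; that refinement, like the asynchronous update scheme, is beyond this sketch.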

Keywords: constrained optimization; metaheuristic; heuristic algorithm; OpenMP; parallel algorithms; SCA algorithm; unconstrained optimization (search for similar items in EconPapers)
JEL-codes: C (search for similar items in EconPapers)
Date: 2022
References: View references in EconPapers; view complete reference list from CitEc
Citations: View citations in EconPapers (2)

Downloads: (external link)
https://www.mdpi.com/2227-7390/10/7/1166/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/7/1166/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:7:p:1166-:d:786548

Access Statistics for this article

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jmathe:v:10:y:2022:i:7:p:1166-:d:786548