Analytical Energy Model Parametrized by Workload, Clock Frequency and Number of Active Cores for Shared-Memory High-Performance Computing Applications

Vitor Ramos Gomes da Silva, Carlos Valderrama, Pierre Manneback and Samuel Xavier-de-Souza
Additional contact information
Vitor Ramos Gomes da Silva: Department of Electronics and Microelectronics (SEMi), University of Mons, 7000 Mons, Belgium
Carlos Valderrama: Department of Electronics and Microelectronics (SEMi), University of Mons, 7000 Mons, Belgium
Pierre Manneback: Department of Electronics and Microelectronics (SEMi), University of Mons, 7000 Mons, Belgium
Samuel Xavier-de-Souza: Department of Computer Engineering and Automation, Universidade Federal do Rio Grande do Norte, Natal 59078-970, Brazil

Energies, 2022, vol. 15, issue 3, 1-22

Abstract: Energy consumption is a crucial concern in high-performance computing (HPC), especially for enabling the next, exascale generation. Modern systems therefore implement various hardware and software features for power management. Nonetheless, because implementations vary widely, software can always be pushed further toward the most efficient use of the hardware. To save energy, software relies on dynamic voltage and frequency scaling (DVFS) and dynamic power management (DPM). Yet, neither technique has privileged information about the hardware architecture or the application behavior, which may lead to energy-inefficient operation. This study proposes an analytical model of architecture and application behavior that can be used to estimate energy-optimal software configurations and to provide informed hints for improving DVFS and DPM techniques for single-node HPC applications. Additionally, model parameters such as the level of parallelism and the dynamic power give insight into how the modeled application consumes energy, which can be helpful for energy-efficient software development and operation. The model takes the number of active cores, the operating frequency, and the input size as inputs and returns an estimate of energy consumption. We model 13 parallel applications and use the models to determine energy-optimal configurations for several input sizes. The results show that up to 70% of energy could be saved in the best scenario compared to the default Linux configuration, and 14% on average. We also compare the proposed model with standard machine-learning models in terms of training overhead and accuracy; our approach incurs about 10 times less energy overhead for the same level of accuracy.
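This record does not reproduce the model equations. As a purely illustrative sketch of the workflow the abstract describes (fit an analytical energy surface parametrized by active cores, clock frequency, and workload to measured samples, then search it for an energy-optimal configuration), the Python snippet below uses an Amdahl-style runtime term and a cubic dynamic-power term. The functional form, parameter names, and all numbers are assumptions for demonstration only, not the formulation published in the article.

import numpy as np
from scipy.optimize import curve_fit

def energy_model(x, serial_frac, p_idle, p_core, p_dyn):
    # Energy = power * runtime, with an Amdahl-style runtime and a cubic
    # frequency term for per-core dynamic power (all assumed forms).
    p, f, w = x
    runtime = w * (serial_frac + (1.0 - serial_frac) / p) / f
    power = p_idle + p * (p_core + p_dyn * f ** 3)
    return power * runtime

# Synthetic "measurements" over a small (cores, frequency) design space.
cores, freqs = np.meshgrid(np.arange(1, 9, dtype=float), np.linspace(1.2, 2.4, 4))
cores, freqs = cores.ravel(), freqs.ravel()
sizes = np.full_like(cores, 1.0)                 # normalized workload
rng = np.random.default_rng(0)
truth = (0.08, 12.0, 2.0, 0.9)                   # hypothetical parameter values
measured = energy_model((cores, freqs, sizes), *truth) * (1 + 0.02 * rng.normal(size=cores.size))

# Fit the model parameters to the measurements.
params, _ = curve_fit(energy_model, (cores, freqs, sizes), measured, p0=[0.1, 10.0, 1.0, 0.5])

# Search the fitted surface for the energy-optimal configuration of a larger run.
grid_p, grid_f = np.meshgrid(np.arange(1, 9, dtype=float), np.linspace(1.2, 2.4, 13))
pred = energy_model((grid_p.ravel(), grid_f.ravel(), np.full(grid_p.size, 2.0)), *params)
best = np.argmin(pred)
print(f"predicted optimum: {int(grid_p.ravel()[best])} cores at {grid_f.ravel()[best]:.2f} GHz")

Replacing the synthetic measurements with real per-configuration energy readings (for example, from RAPL counters) would mirror the general fit-then-optimize procedure the abstract describes.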

Keywords: energy model; dynamic frequency and voltage scaling; dynamic power management; high performance computing
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2022

Downloads: (external link)
https://www.mdpi.com/1996-1073/15/3/1213/pdf (application/pdf)
https://www.mdpi.com/1996-1073/15/3/1213/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:15:y:2022:i:3:p:1213-:d:743778


Energies is currently edited by Ms. Agatha Cao

More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jeners:v:15:y:2022:i:3:p:1213-:d:743778