Trustworthiness of Deep Learning Under Adversarial Attacks in Power Systems
Dowens Nicolas,
Kevin Orozco,
Steve Mathew,
Yi Wang,
Wafa Elmannai and
George C. Giakos
Additional contact information: All authors are with the Electrical and Computer Engineering Department, Manhattan University, Riverdale, NY 10471, USA
Energies, 2025, vol. 18, issue 10, 1-22
Abstract:
Advanced as they are, deep learning (DL) models in cyber-physical systems remain vulnerable to attacks such as the Fast Gradient Sign Method (FGSM), DeepFool, and the Jacobian-based Saliency Map Attack (JSMA), calling system trustworthiness into question in high-stakes applications such as power systems. In power grids, DL models such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks are commonly used for tasks such as state estimation, load forecasting, and fault detection, relying on their ability to learn complex, non-linear patterns in high-dimensional data such as voltage, current, and frequency measurements. Nevertheless, these models are susceptible to adversarial attacks, which can lead to inaccurate predictions and system failure. In this paper, the impact of these attacks on DL models is analyzed, and defensive countermeasures such as Adversarial Training, Gaussian Augmentation, and Feature Squeezing are employed to investigate vulnerabilities in industrial control systems with potentially disastrous real-world consequences. Emphasizing the inherent requirement for robust defenses, this work lays the groundwork for follow-on efforts to build security and resilience into ML and DL algorithms and to ensure the dependability of mission-critical AI systems.
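To illustrate the threat model and one of the defenses named in the abstract, below is a minimal PyTorch sketch of an FGSM perturbation and a Gaussian-augmentation step. The model, loss, epsilon, and sigma are illustrative assumptions for exposition, not the paper's actual experimental configuration.

```python
# Minimal sketch of FGSM and Gaussian augmentation (assumed PyTorch setup;
# model, epsilon, and sigma are illustrative, not the paper's configuration).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.05):
    """Craft an adversarial example x' = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each input feature in the direction that increases the loss.
    return (x + epsilon * x.grad.sign()).detach()

def gaussian_augment(x, sigma=0.1):
    """Gaussian augmentation: add noise to training inputs so the model
    learns a smoother decision surface around clean measurements."""
    return x + sigma * torch.randn_like(x)
```

In an adversarial-training loop of the kind the abstract describes, `fgsm_attack` would generate perturbed copies of measurement batches (e.g., voltage or frequency readings) that are mixed into training alongside `gaussian_augment` outputs; the hyperparameters here are placeholders.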
Keywords: adversarial attacks; deep learning; machine learning; power systems; trustworthiness
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2025
Downloads:
https://www.mdpi.com/1996-1073/18/10/2611/pdf (application/pdf)
https://www.mdpi.com/1996-1073/18/10/2611/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:18:y:2025:i:10:p:2611-:d:1658752