Optimizing Physics-Informed Neural Networks with hybrid activation functions: A comparative study on improving residual loss and accuracy using partial differential equations
Husna Zafar,
Ahmad,
Xiangyang Liu and
Muhammad Noveel Sadiq
Chaos, Solitons & Fractals, 2025, vol. 191, issue C
Abstract:
Physics-informed neural networks have bridged the gap between traditional numerical methods and deep learning based approaches in scientific computing. However, they still face limitations in convergence, accuracy, and minimizing residual loss, where the activation function plays a crucial role. Traditional activation functions often suffer from the vanishing-gradient problem during backpropagation, highlighting the need for better alternatives for efficient training of Physics-Informed Neural Networks. In this paper, new hybrid activation functions are proposed that combine the salient characteristics of traditional activation functions. These activation functions were tested with different network hyperparameters on the Swift–Hohenberg equation, a leading tool for modeling pattern development and evolution in fields such as thermal convection and fluid and temperature dynamics, as well as on the Burgers equation. Manual tuning of hyperparameters is employed to critically assess the behavior of the new activation functions in different experimental settings. Results show that hybrid activation functions have better learning capabilities than traditional activation functions. The GaussSwish hybrid activation function, in particular, proved highly effective across different network settings, showing better learning ability when training models for complex problems. This research also reveals that not only the activation function but also the residual points sampled through different Monte Carlo sequences influence the performance of Physics-Informed Neural Networks.
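The abstract does not give the exact functional form of the GaussSwish hybrid, but the name suggests a combination of the Gaussian and Swish activations. A minimal sketch, assuming the hybrid is the pointwise product of the two (the article's actual definition may differ):

```python
import numpy as np

def swish(x):
    # Swish activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def gaussian(x):
    # Gaussian activation: exp(-x^2)
    return np.exp(-x ** 2)

def gauss_swish(x):
    # Hypothetical GaussSwish hybrid: Gaussian envelope times Swish.
    # The Gaussian factor keeps the response localized, while Swish
    # contributes a smooth, non-monotonic gradient near the origin.
    return gaussian(x) * swish(x)
```

Such a product is smooth and non-saturating near zero, which is the property the abstract credits for mitigating vanishing gradients during backpropagation.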
Keywords: Deep learning; PDEs; Physics-Informed Neural Networks (PINNs); Hybrid activation functions; Swift–Hohenberg equations; 1D Burgers equation; Minimized residual loss; Enhanced training performance
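The abstract notes that residual (collocation) points sampled through different Monte Carlo sequences influence PINN performance. As an illustration, a minimal generator for the Halton sequence, one common low-discrepancy choice for sampling residual points (the article's specific sequences are not stated in this record):

```python
def halton(n, base=2):
    # First n points of the 1-D Halton low-discrepancy sequence in [0, 1),
    # built by reversing the base-`base` digits of each index.
    points = []
    for i in range(1, n + 1):
        f, r = 1.0, 0.0
        k = i
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        points.append(r)
    return points

# 2-D residual points for a space-time PDE domain: pair coprime bases.
xs = halton(8, base=2)
ts = halton(8, base=3)
residual_points = list(zip(xs, ts))
```

Unlike pseudo-random sampling, such quasi-random sequences fill the domain more evenly, which can change how well the residual loss represents the PDE over the whole domain.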
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S0960077924012797 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:chsofr:v:191:y:2025:i:c:s0960077924012797
DOI: 10.1016/j.chaos.2024.115727
Chaos, Solitons & Fractals is currently edited by Stefano Boccaletti and Stelios Bekiros
Bibliographic data for series maintained by Thayer, Thomas R.