Mathematical Generalization of Kolmogorov–Arnold Networks (KAN) and Their Variants
Fray L. Becerra-Suarez,
Ana G. Borrero-Ramírez,
Edwin Valencia-Castillo and
Manuel G. Forero
Additional contact information
Fray L. Becerra-Suarez: Grupo de Investigación en Inteligencia Artificial (UMA-AI), Facultad de Ingeniería y Negocios, Universidad Privada Norbert Wiener, Lima 15046, Peru
Ana G. Borrero-Ramírez: Semillero Lún, Universidad de Ibagué, Ibagué 730001, Colombia
Edwin Valencia-Castillo: Departamento de Sistemas, Estadística e Informática, Universidad Nacional de Cajamarca, Cajamarca 06001, Peru
Manuel G. Forero: Semillero Lún, Universidad de Ibagué, Ibagué 730001, Colombia
Mathematics, 2025, vol. 13, issue 19, 1-30
Abstract:
Neural networks have become a fundamental tool for solving complex problems, from image processing and speech recognition to time series prediction and large-scale data classification. However, traditional neural architectures suffer from limited interpretability due to their opaque internal representations and the lack of explicit interaction between linear and nonlinear transformations. To address these limitations, Kolmogorov–Arnold Networks (KAN) have emerged as a mathematically grounded approach capable of efficiently representing complex nonlinear functions. Building on the representation principles established by Kolmogorov and Arnold, KAN offer an alternative to traditional architectures that mitigates issues such as overfitting and lack of interpretability. Despite their solid theoretical basis, practical implementations of KAN face challenges such as the selection of suitable basis functions and computational efficiency. This paper provides a systematic review that goes beyond previous surveys by consolidating the diverse structural variants of KAN (e.g., Wavelet-KAN, Rational-KAN, MonoKAN, Physics-KAN, Linear Spline KAN, and Orthogonal Polynomial KAN) into a unified framework. In addition, we emphasize their mathematical foundations, compare their advantages and limitations, and discuss their applicability across domains. From this review, three main conclusions can be drawn: (i) spline-based KAN remain the most widely used due to their stability and simplicity, (ii) rational and wavelet-based variants provide greater expressivity but introduce numerical challenges, and (iii) emerging approaches such as Physics-KAN and automatic basis selection open promising directions for scalability and interpretability. These insights provide a benchmark for future research and practical implementations of KAN.
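For context on the theoretical foundation the abstract invokes: the Kolmogorov–Arnold superposition theorem states that any continuous function of n variables on a bounded domain can be expressed using only continuous univariate functions and addition. A standard statement (our paraphrase, not a quotation from the paper) is

f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),

where each \phi_{q,p} \colon [0,1] \to \mathbb{R} and each \Phi_q \colon \mathbb{R} \to \mathbb{R} is continuous. KAN make this constructive by parameterizing the univariate functions (e.g., as splines) and learning them from data.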
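To make the spline-based variant concrete, here is a minimal, illustrative sketch of a single KAN layer built from cubic B-splines with NumPy/SciPy. The class name, shapes, and hyperparameters are our assumptions for illustration, not the paper's reference implementation, and training of the coefficients is omitted.

import numpy as np
from scipy.interpolate import BSpline

class SplineKANLayer:
    """One KAN layer: y_j = sum_i phi_{j,i}(x_i), each phi_{j,i} a cubic B-spline."""

    def __init__(self, in_dim, out_dim, n_coef=8, degree=3, x_range=(-1.0, 1.0)):
        self.in_dim, self.out_dim, self.degree = in_dim, out_dim, degree
        # Clamped uniform knot vector shared by every edge function.
        inner = np.linspace(x_range[0], x_range[1], n_coef - degree + 1)
        self.knots = np.concatenate(
            ([x_range[0]] * degree, inner, [x_range[1]] * degree))
        # One trainable coefficient vector per edge (input i -> output j);
        # random initialization stands in for actual training here.
        self.coef = 0.1 * np.random.randn(out_dim, in_dim, n_coef)

    def __call__(self, x):
        # x: (batch, in_dim) -> (batch, out_dim).
        x = np.clip(x, self.knots[0], self.knots[-1])  # keep inputs on the spline grid
        y = np.zeros((x.shape[0], self.out_dim))
        for j in range(self.out_dim):
            for i in range(self.in_dim):
                phi = BSpline(self.knots, self.coef[j, i], self.degree)
                y[:, j] += phi(x[:, i])  # node j sums its learned edge functions
        return y

layer = SplineKANLayer(in_dim=2, out_dim=3)
out = layer(np.random.uniform(-1.0, 1.0, size=(4, 2)))
print(out.shape)  # (4, 3)

Stacking such layers and fitting the coefficients by gradient descent yields a deep KAN; the variants surveyed in the paper swap the B-spline basis for wavelets, rational functions, or orthogonal polynomials.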
Keywords: Kolmogorov-Arnold Networks; neural networks; interpretability; machine learning; deep learning; function approximation; computational efficiency; nonlinear functions
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/19/3128/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/19/3128/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:19:p:3128-:d:1761862