Secure and Transparent Banking: Explainable AI-Driven Federated Learning Model for Financial Fraud Detection
Saif Khalifa Aljunaid,
Saif Jasim Almheiri,
Hussain Dawood and
Muhammad Adnan Khan
Additional contact information
Saif Khalifa Aljunaid: School of Computing, Skyline University College, University City Sharjah, Sharjah 1797, United Arab Emirates
Saif Jasim Almheiri: School of Computing, Skyline University College, University City Sharjah, Sharjah 1797, United Arab Emirates
Hussain Dawood: School of Computing, Skyline University College, University City Sharjah, Sharjah 1797, United Arab Emirates
Muhammad Adnan Khan: School of Computing, Skyline University College, University City Sharjah, Sharjah 1797, United Arab Emirates
JRFM, 2025, vol. 18, issue 4, 1-26
Abstract:
The increasing sophistication of fraud has rendered rule-based fraud detection obsolete, exposing banks to greater financial risk, reputational damage, and regulatory penalties. Financial stability, customer trust, and compliance are increasingly threatened as centralized Artificial Intelligence (AI) models fail to adapt, leading to inefficiencies, false positives, and undetected fraud. These limitations call for advanced AI solutions that allow banks to adapt to emerging fraud patterns. While AI enhances fraud detection, its black-box nature limits transparency, making it difficult for analysts to trust, validate, and refine decisions and posing challenges for compliance, fraud explanation, and adversarial defense. Effective fraud detection requires models that balance high accuracy with adaptability to emerging fraud patterns. Federated Learning (FL) enables distributed training for fraud detection while preserving data privacy and ensuring legal compliance. However, traditional FL approaches also operate as black-box systems, preventing analysts from trusting, verifying, or improving the decisions made by AI in fraud detection. Explainable AI (XAI) enhances fraud analysis by improving interpretability, fostering trust, refining classifications, and supporting compliance. Integrating XAI with FL yields a privacy-preserving and explainable model that strengthens both security and decision-making. This research proposes an Explainable FL (XFL) model for financial fraud detection that combines FL's privacy-preserving security with XAI's interpretability. Using Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), analysts can explain and improve fraud classifications while maintaining privacy, accuracy, and compliance. The proposed model is trained on a financial fraud detection dataset; the results demonstrate efficient detection and effective suppression of false positives, improving on existing models, with the proposed model attaining 99.95% accuracy and a 0.05% miss rate and paving the way for a more effective and comprehensive AI-based system for detecting potential fraud in banking.
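To make the approach described in the abstract concrete, the following is a minimal, self-contained Python sketch of the XFL idea: several banks train a fraud classifier on their own local data, a central server aggregates the weights via federated averaging (FedAvg), and each prediction is explained with SHAP-style per-feature attributions (exact for a linear model under a feature-independence assumption). All names, the synthetic data, and the choice of logistic regression are illustrative assumptions, not the paper's actual implementation or dataset.

# Sketch of Explainable Federated Learning (XFL) for fraud detection:
# local training per bank, FedAvg aggregation, SHAP-style attributions.
# Hypothetical names (local_train, fedavg, ...) used for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, N_FEATURES = 3, 5          # e.g., 3 banks, 5 transaction features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_train(w, X, y, lr=0.1, epochs=20):
    """One client's local logistic-regression update (full-batch gradient descent).
    The raw transaction data X, y never leaves the client; only w is shared."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def fedavg(weights, sizes):
    """Server-side FedAvg: average client weight vectors, weighted by local data size."""
    return np.average(weights, axis=0, weights=np.asarray(sizes, dtype=float))

# Synthetic per-bank transaction data (stand-in for real, private data).
true_w = rng.normal(size=N_FEATURES)
clients = []
for _ in range(N_CLIENTS):
    X = rng.normal(size=(200, N_FEATURES))
    y = (X @ true_w > 0).astype(float)    # synthetic fraud labels
    clients.append((X, y))

# Federated rounds: broadcast global weights -> local training -> aggregation.
w_global = np.zeros(N_FEATURES)
for _ in range(10):
    local_ws = [local_train(w_global, X, y) for X, y in clients]
    w_global = fedavg(local_ws, [len(y) for _, y in clients])

# For a linear model with independent features, the SHAP value of feature j
# is phi_j = w_j * (x_j - E[x_j]). This mirrors what SHAP/LIME would show
# a fraud analyst: which features pushed this transaction toward "fraud".
X_all = np.vstack([X for X, _ in clients])
baseline = X_all.mean(axis=0)
x = X_all[0]                              # one transaction to explain
phi = w_global * (x - baseline)
print("fraud score:", sigmoid(x @ w_global))
print("per-feature attributions:", np.round(phi, 3))

In a production system the aggregation would run on a coordinating server with secure communication, and a model-agnostic explainer (e.g., SHAP's KernelExplainer or LIME on tabular data) would replace the closed-form linear attribution used here for brevity.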
Keywords: financial fraud detection; secure and transparent banking; artificial intelligence (AI); federated learning (FL); explainable AI (XAI); explainable federated learning (XFL); Shapley additive explanations (SHAP)
JEL-codes: C E F2 F3 G
Date: 2025
Downloads:
https://www.mdpi.com/1911-8074/18/4/179/pdf (application/pdf)
https://www.mdpi.com/1911-8074/18/4/179/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jjrfmx:v:18:y:2025:i:4:p:179-:d:1621988
JRFM is currently edited by Ms. Chelthy Cheng