A Comparison between Explainable Machine Learning Methods for Classification and Regression Problems in the Actuarial Context
Catalina Lozano-Murcia,
Francisco P. Romero,
Jesus Serrano-Guerrero and
Jose A. Olivas
Additional contact information
All authors: Department of Information Systems and Technologies, University of Castilla La Mancha, 13071 Ciudad Real, Spain
Mathematics, 2023, vol. 11, issue 14, 1-20
Abstract:
Machine learning, a subfield of artificial intelligence, emphasizes the creation of algorithms capable of learning from data and generating predictions. However, in actuarial science, the limited interpretability of these models often presents challenges, raising concerns about their accuracy and reliability. Explainable artificial intelligence (XAI) has emerged to address these issues by facilitating the development of models that are both accurate and comprehensible. This paper conducts a comparative analysis of various XAI approaches for tackling distinct data-driven insurance problems. The machine learning methods are evaluated on predictive performance, using the mean absolute error for regression problems and accuracy for classification problems. Moreover, the interpretability of these methods is assessed through quantitative and qualitative measures of the explanations offered by each explainability technique. The findings reveal that the performance of different XAI methods varies depending on the particular insurance problem at hand. Our research underscores the significance of considering both accuracy and interpretability when selecting a machine learning approach for resolving data-driven insurance challenges. By developing accurate and comprehensible models, we can enhance the transparency and trustworthiness of the predictions generated by these models.
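As an aside, the two evaluation metrics named in the abstract are standard and easy to state. The following is a minimal illustrative sketch in plain Python, using toy data invented for the example (not the paper's datasets):

```python
# Illustrative sketch of the two metrics the abstract names:
# mean absolute error (regression) and accuracy (classification).

def mean_absolute_error(y_true, y_pred):
    """Average absolute deviation between observed and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy(y_true, y_pred):
    """Fraction of predicted labels that match the observed labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy regression example (e.g., predicted vs. observed claim amounts).
print(mean_absolute_error([100.0, 250.0, 80.0], [110.0, 240.0, 95.0]))  # 11.666...

# Toy classification example (e.g., predicted vs. observed lapse indicator).
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

In practice these would be computed with a library such as scikit-learn (`mean_absolute_error`, `accuracy_score`) on held-out test data; the hand-rolled versions above only serve to make the definitions concrete.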
Keywords: machine learning; artificial intelligence; explainable machine learning; accuracy; interpretability (search for similar items in EconPapers)
JEL-codes: C (search for similar items in EconPapers)
Date: 2023
Citations: View citations in EconPapers (1)
Downloads: (external link)
https://www.mdpi.com/2227-7390/11/14/3088/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/14/3088/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:14:p:3088-:d:1193020
Mathematics is currently edited by Ms. Emma He
Bibliographic data for series maintained by MDPI Indexing Manager.