Questioning the ability of feature-based explanations to empower non-experts in robo-advised financial decision-making
Astrid Bertrand,
James Eagan and
Winston Maxwell
Additional contact information
Astrid Bertrand: DIVA (Design, Interaction, Visualization & Applications), LTCI (Laboratoire Traitement et Communication de l'Information), Télécom Paris, Institut Mines-Télécom, Institut Polytechnique de Paris; INFRES (Département Informatique et Réseaux), Télécom ParisTech
James Eagan: DIVA (Design, Interaction, Visualization & Applications), LTCI (Laboratoire Traitement et Communication de l'Information), Télécom Paris, Institut Mines-Télécom, Institut Polytechnique de Paris; INFRES (Département Informatique et Réseaux), Télécom ParisTech
Winston Maxwell: NOS (Numérique, Organisation et Société), i3-SES (Institut interdisciplinaire de l'innovation de Télécom Paris), Télécom Paris, Institut Mines-Télécom, Institut Polytechnique de Paris; I3 (Institut interdisciplinaire de l'innovation), CNRS (Centre National de la Recherche Scientifique); SES (Département Sciences Economiques et Sociales), Télécom Paris
Post-Print from HAL
Abstract:
Robo-advisors are democratizing access to life insurance by enabling fully online underwriting. In Europe, financial legislation requires that the reasons for recommending a life insurance plan be explained in terms of the client's characteristics, so as to empower the client to make a "fully informed decision". In this study, conducted in France, we seek to understand whether legal requirements for feature-based explanations actually help users in their decision-making. We first conduct a qualitative study to characterize the explainability needs expressed by non-expert users and by regulators with expertise in customer protection. We then run a large-scale quantitative study using Robex, a simplified robo-advisor built with ecological interface design that delivers recommendations with explanations in hybrid textual and visual formats: either "dialogic" (more textual) or "graphical" (more visual). We find that providing feature-based explanations does not improve appropriate reliance or understanding compared to providing no explanation at all. Moreover, dialogic explanations increase users' trust in the robo-advisor's recommendations, sometimes to the users' detriment. This real-world scenario illustrates how XAI can address information asymmetry in complex areas such as finance. The work has implications for other critical, AI-based recommender systems, where the General Data Protection Regulation (GDPR) may require similar provisions for feature-based explanations.
CCS Concepts: • Human-centered computing → Empirical studies in HCI.
Keywords: explainability; intelligibility; AI regulation; financial inclusion (search for similar items in EconPapers)
Date: 2023-06-12
New Economics Papers: this item is included in nep-ain, nep-cmp and nep-fle
Note: View the original document on HAL open archive server: https://hal.science/hal-04125939v1
Published in FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency, Jun 2023, Chicago, United States. pp.943-958, ⟨10.1145/3593013.3594053⟩
Downloads: (external link)
https://hal.science/hal-04125939v1/document (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-04125939
DOI: 10.1145/3593013.3594053
More papers in Post-Print from HAL
Bibliographic data for series maintained by CCSD.