Artificial Intelligence: impacts of explainability on value creation and decision making
Taoufik El Oualidi
Post-Print from HAL
Abstract:
Over the last few years, companies' investment in new AI systems has grown strongly and steadily. Yet outside Big Tech, the use of AI remains marginal at this stage and seems to spark caution and apprehension. One possible reason for this hesitation is a lack of trust, linked in particular to so-called black-box AI technologies such as deep learning. Our research objective is therefore to explore the effects of explainability on trust in these new AI-based digital systems, with which users either interact or whose results they accept directly in the case of a fully autonomous system. More precisely, with a view to the industrialized use of AI, we study the role of explainability for stakeholders in the decision-making process as well as in value creation.
Keywords: Artificial Intelligence; Explainability; Trust; Value creation; Decision making; Machine learning
Date: 2022-05-17
Published in Research Challenges in Information Science: 16th International Conference, RCIS 2022, Barcelona, Spain, May 17–20, 2022, Proceedings, pp. 795–802, ⟨10.1007/978-3-031-05760-1_57⟩
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-04103938
DOI: 10.1007/978-3-031-05760-1_57