Interpretable artificial intelligence systems in medical imaging: review and theoretical framework
Tiantian Xian, Panos Constantinides and Nikolay Mehandjiev
Chapter 14 in Research Handbook on Artificial Intelligence and Decision Making in Organizations, 2024, pp 240-265 from Edward Elgar Publishing
Abstract:
The development of Interpretable Artificial Intelligence (AI) has drawn substantial attention to the role of AI in augmenting human decision-making. In this chapter, we review the literature on medical imaging to develop a framework for Interpretable AI systems that enable the diagnostic process. We identify three components that constitute Interpretable AI systems, namely human agents, data, and machine learning (ML) models, and discuss their classifications and dimensions. Using the workflow of AI-augmented breast screening in the UK as an example, we identify the tensions that may emerge as human agents work with ML models and data. We discuss how these tensions may affect the performance of Interpretable AI systems in the diagnostic process and conclude with implications for further research.
Keywords: Business and Management; Innovations and Technology
Date: 2024
Downloads: https://www.elgaronline.com/doi/10.4337/9781803926216.00023 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:elg:eechap:21708_14
Ordering information: This item can be ordered from http://www.e-elgar.com