Drivers and social implications of Artificial Intelligence adoption in healthcare during the COVID-19 pandemic
Darius-Aurel Frank,
Christian T. Elbaek,
Caroline Kjaer Borsting,
Panagiotis Mitkidis,
Tobias Otterbring and
Sylvie Borau
Additional contact information
Darius-Aurel Frank: Aarhus University
Christian T. Elbaek: Aarhus University
Caroline Kjaer Borsting: Aarhus University
Panagiotis Mitkidis: Duke University
Tobias Otterbring: University of Agder (UIA)
Sylvie Borau: Toulouse School of Economics (TSE-R), Université Toulouse Capitole (CNRS, EHESS, INRAE)
Post-Print from HAL
Abstract:
The COVID-19 pandemic continues to impact people worldwide, steadily depleting scarce resources in healthcare. Medical Artificial Intelligence (AI) promises much-needed relief, but only if the technology is adopted at scale. The present research investigates people's intention to adopt medical AI, as well as the drivers of this adoption, in a representative study of two European countries (Denmark and France, N = 1068) during the initial phase of the COVID-19 pandemic. Results reveal AI aversion: only 1 in 10 individuals chose medical AI over human physicians in a hypothetical triage phase before COVID-19 hospital entrance. Key predictors of medical AI adoption are people's trust in medical AI and, to a lesser extent, the trait of open-mindedness. More importantly, our results reveal that mistrust of and perceived uniqueness neglect from human physicians, as well as a lack of social belonging, significantly increase people's medical AI adoption. These results suggest that for medical AI to be widely adopted, people may need to express less confidence in human physicians and even feel disconnected from humanity. We discuss the social implications of these findings and propose that successful medical AI adoption policy should focus on trust-building measures, without eroding trust in human physicians.
Keywords: Covid-19; Artificial intelligence; Pandemics
Date: 2021-11
Citations: View citations in EconPapers (1)
Published in PLoS ONE, 2021, 16 (11), ⟨10.1371/journal.pone.0259928⟩
There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-03585448
DOI: 10.1371/journal.pone.0259928
Bibliographic data for series maintained by CCSD.