Examining the impact of personalization and carefulness in AI-generated health advice: Trust, adoption, and insights in online healthcare consultations experiments
Hongyi Qin,
Yifan Zhu,
Yan Jiang,
Siqi Luo and
Cui Huang
Technology in Society, 2024, vol. 79, issue C
Abstract:
Artificial intelligence (AI) technologies, exemplified by health chatbots, are transforming the healthcare industry. Their widespread application has the potential to enhance decision-making efficiency, improve the quality of healthcare services, and reduce medical costs. While there is ongoing discussion about the opportunities and challenges brought by AI, less is known about the public's attitude towards its use in the healthcare domain. Understanding public attitudes can help policymakers better grasp the public's needs and involve them in making decisions that benefit both technological development and social welfare. Therefore, this study presents evidence from two between-subjects experiments. It aims to compare the public's adoption of, and trust in, health advice provided by human versus AI doctors, and to explore the potential effects of personalization and carefulness on public attitudes. The experimental designs adopt a trust-centered, cognitively and emotionally balanced perspective to study the public's intention to adopt AI. In Experiment 1, the experimental conditions vary the type of decision-maker providing online consultation advice: AI or human doctors. In Experiment 2, the experimental conditions vary the levels of perceived personalization and carefulness (high vs. low). A total of 734 participants took part in the study. They were randomly assigned to one of the intervention conditions and responded to manipulation checks after reading the materials. Using a seven-point Likert-type scale, participants rated their cognitive and emotional trust levels and their intention to adopt the advice. Partial Least Squares Structural Equation Modeling (PLS-SEM) was conducted to estimate the proposed theoretical model.
Qualitative interviews on both real-world and AI-generated treatment recommendations further enriched the understanding of public perceptions. The results show that AI-generated advice is generally slightly less trusted and adopted by the public. However, a noticeable inclination towards AI-generated advice emerges when the AI demonstrates proficiency in understanding individuals' health conditions and providing empathetic consultations. Further analyses confirm the mediating influence of emotional trust between cognitive trust and adoption intention. These findings provide deeper insights into the process of adoption and trust formation. Moreover, they offer guidance to digital healthcare providers, empowering them with the knowledge to co-design AI implementation strategies that cater to the public's expectations.
Keywords: AI health chatbots; ChatGPT; Online healthcare consultations; Public adoption; Cognitive trust; Emotional trust
Date: 2024
Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0160791X24002744
Full text for ScienceDirect subscribers only
Persistent link: https://EconPapers.repec.org/RePEc:eee:teinso:v:79:y:2024:i:c:s0160791x24002744
DOI: 10.1016/j.techsoc.2024.102726
Technology in Society is currently edited by Charla Griffy-Brown
More articles in Technology in Society from Elsevier
Bibliographic data for series maintained by Catherine Liu.