A Bitter Pill to Swallow? The Consequences of Patient Evaluation in Online Health Question-and-Answer Platforms
Chen Chen and
Dylan Walker
Additional contact information
Chen Chen: School of Management and Economics, The Chinese University of Hong Kong, Shenzhen 518172, China
Dylan Walker: George L. Argyros School of Business and Economics, Chapman University, Orange, California 92866
Information Systems Research, 2023, vol. 34, issue 3, 867-889
Abstract:
Online health question-and-answer (Q&A) platforms (OHQPs), where patients post health-related questions, evaluate advice from multiple doctors, and direct a bounty (monetary reward) to their most preferred answer, have become a prominent channel for patients to receive medical advice in China. To explore the quality of medical advice on these platforms, we analyzed data on patients’ evaluation of ∼497,000 answers to ∼114,000 questions on one of the most popular OHQPs, 120ask.com, over a three-month period. We assembled a panel of independent physicians and instructed them to evaluate the quality of ∼13,000 answers. We found that the quality of medical advice offered on the platform was high on average and that low-quality answers were rare (6%). However, our results also indicate that patients lacked the ability to discriminate advice quality: they were as likely to choose the best answer as the worst. The medical accuracy of patient evaluation was worse in critical categories (cancer, internal medicine) and for vulnerable subpopulations (pediatrics). Given that millions of patients seek medical advice from OHQPs in China annually, the social and economic implications of this finding are troubling. To understand how patients evaluate advice, we trained deep neural networks to think like patients, allowing us to identify patients’ positive and negative responses to different heuristic cues. Although our results indicate that OHQPs perform well, we identified several concerns that should be addressed through platform design and policy changes. Because the Q&A process lacks peer review mechanisms, signals of advice quality are not conveyed to patients, forcing them to rely on heuristic cues, which cannot effectively guide them toward the best advice. We also found that the platform reputation metric was not correlated with the quality of an advice giver’s advice, may effectively encourage patients to select lower-quality medical advice, and increased the risk of moral hazard, whereby malicious actors intentionally provide less accurate but more agreeable advice for personal gain. Our analysis revealed bad actors on the platform, including drug promoters and spammers. Finally, we found that OHQPs exacerbated care avoidance. We discuss several potential policy changes to address these shortcomings.
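Illustration: the abstract does not describe the deep learning approach in detail. Purely as a sketch of the general idea (a neural model trained to predict which candidate answer a patient rewards, based on textual cues in the answers), a minimal PyTorch example is given below. This is not the authors' model; the architecture, the bag-of-words features standing in for heuristic cues, and the toy data are all assumptions made only for illustration.

    # Minimal sketch (not the authors' model): a neural scorer trained to predict
    # which answer a patient selects, using bag-of-words token ids as stand-ins
    # for the heuristic cues discussed in the abstract. All data below is toy data.
    import torch
    import torch.nn as nn

    class AnswerScorer(nn.Module):
        """Scores an answer's appeal to patients from its token ids."""
        def __init__(self, vocab_size, embed_dim=32):
            super().__init__()
            self.embed = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
            self.head = nn.Linear(embed_dim, 1)

        def forward(self, token_ids, offsets):
            return self.head(self.embed(token_ids, offsets)).squeeze(-1)

    # Hypothetical example: one question with three candidate answers (token ids),
    # where the patient directed the bounty to answer 1.
    answers = [torch.tensor([3, 7, 7, 15]),
               torch.tensor([2, 9, 42, 42, 8]),
               torch.tensor([5, 1])]
    token_ids = torch.cat(answers)
    offsets = torch.tensor([0, 4, 9])   # start index of each answer in token_ids
    chosen = torch.tensor([1])          # index of the patient-selected answer

    model = AnswerScorer(vocab_size=100)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()     # softmax over the candidate answers

    for _ in range(200):                # fit the toy example
        optimizer.zero_grad()
        scores = model(token_ids, offsets)           # one score per answer
        loss = loss_fn(scores.unsqueeze(0), chosen)  # answers treated as classes
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        print(model(token_ids, offsets).softmax(dim=0))  # predicted choice probabilities

In this sketch the scores of a question's candidate answers are compared through a softmax, so training pushes the model to rank the patient-selected answer above its competitors; a learned scorer of this kind can then be probed to see which cues raise or lower an answer's predicted appeal.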
Keywords: online healthcare; patient evaluation; care avoidance; deep learning; online health consulting; peer evaluation
Date: 2023
Downloads: http://dx.doi.org/10.1287/isre.2022.1158 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:34:y:2023:i:3:p:867-889
More articles in Information Systems Research from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.