EconPapers
Comparing large language models and search engine responses to common orthodontic questions

Yuanyuan Ren and Jing Sun

PLOS ONE, 2026, vol. 21, issue 1, 1-13

Abstract: Background: Large language models (LLMs) show potential for supporting patient education and self-management, but their performance in answering orthodontic questions has yet to be explored. Objectives: This study compares the quality, empathy, readability, and satisfaction of responses from LLMs and search engines to common orthodontic questions. Methods: Forty-five common orthodontic questions (six categories) and a prompt were developed, and a self-designed multidimensional evaluation questionnaire was constructed. The questions were presented to 5 LLMs and 3 search engines on December 22, 2024. The primary outcomes were the median expert-rated scores of LLM versus search engine responses on quality, empathy, readability, and satisfaction, using 5- or 10-point Likert scales. Results: LLMs scored significantly higher than search engines in quality (4.00 vs. 3.50, p

Date: 2026

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0339908 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 39908&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0339908

DOI: 10.1371/journal.pone.0339908


More articles in PLOS ONE from Public Library of Science

Page updated 2026-01-11
Handle: RePEc:plo:pone00:0339908