Generative AI Meets Open-Ended Survey Responses: Research Participant Use of AI and Homogenization
Simone Zhang, Janet Xu and AJ Alvero
Sociological Methods & Research, 2025, vol. 54, issue 3, 1197-1242
Abstract:
The growing popularity of generative artificial intelligence (AI) tools presents new challenges for data quality in online surveys and experiments. This study examines participants’ use of large language models to answer open-ended survey questions and describes empirical tendencies in human versus large language model (LLM)-generated text responses. In an original survey of research participants recruited from a popular online platform for sourcing social science research subjects, 34 percent reported using LLMs to help them answer open-ended survey questions. Simulations comparing human-written responses from three pre-ChatGPT studies with LLM-generated text reveal that LLM responses are more homogeneous and positive, particularly when describing social groups in response to sensitive questions. These homogenization patterns may mask important underlying social variation in attitudes and beliefs among human subjects, raising concerns about data validity. Our findings shed light on the scope and potential consequences of participants’ LLM use in online research.
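As a rough illustration of the kind of comparison the abstract describes, homogeneity among a set of open-ended responses can be summarized as the mean pairwise cosine similarity of their text vectors. The sketch below is hypothetical and not the authors' procedure: the function name mean_pairwise_similarity, the TF-IDF representation, and the toy responses are all assumptions for illustration only.

# Minimal sketch (assumed approach, not the paper's pipeline): score a set of
# survey responses by the average cosine similarity over all distinct pairs.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(responses):
    """Average cosine similarity across all distinct pairs of responses."""
    tfidf = TfidfVectorizer().fit_transform(responses)  # sparse document-term matrix
    sims = cosine_similarity(tfidf)                     # n x n similarity matrix
    upper = np.triu_indices_from(sims, k=1)             # distinct pairs, excluding self-pairs
    return sims[upper].mean()

# Hypothetical toy data: terser, varied "human" answers vs. formulaic "LLM-like" ones.
human_responses = [
    "I think it really depends on the person and the situation.",
    "Honestly, I have mixed feelings about this.",
    "Not sure, I have never really thought about it.",
]
llm_like_responses = [
    "This is a nuanced issue, and it is important to consider many perspectives.",
    "This is a complex issue with many important perspectives to consider.",
    "There are many perspectives to consider on this nuanced and important issue.",
]

print("human responses:   ", round(mean_pairwise_similarity(human_responses), 3))
print("LLM-like responses:", round(mean_pairwise_similarity(llm_like_responses), 3))

Higher scores indicate more homogeneous responses; under this toy setup, the repetitive LLM-like answers score higher than the varied human ones.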
Keywords: surveys; data quality; large language models; generative AI; human-AI interaction
Date: 2025
Downloads: https://journals.sagepub.com/doi/10.1177/00491241251327130 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:somere:v:54:y:2025:i:3:p:1197-1242
DOI: 10.1177/00491241251327130