How to Use LLMs Ethically in Academic Writing?
Tiantian Yu
Education Insights, 2025, vol. 2, issue 5, 25-39
Abstract:
This paper presents an experimental study of selected Large Language Models (LLMs) and Artificial Intelligence Generated Content (AIGC) detection systems, conducted within a mixed-methods research paradigm that combines empirical validation with Qualitative Content Analysis (QCA). The empirical validation comprises a condition-optimization experiment and a main experiment, and the materials for the qualitative content analysis are drawn directly from these experimental outputs. In the experiments, six LLMs are evaluated against four AIGC detectors. Based on an analysis of the content generated by these LLMs, the existing theoretical framework for the application of LLMs in academic writing, referred to as the authors’ checklist, is revised. The updated framework refines the checklist step for assessing and correcting the accuracy of AI-generated content, and comprises five steps for LLM-assisted academic writing: Intellectual Contribution, Accuracy of Conceptions, Accuracy of Demonstrations, Academic Competency, and Transparency. It further emphasizes the importance of authors’ innovation and proficiency in prompting when using LLMs ethically in academic writing.
Keywords: large language models; academic writing; control experiment; qualitative content analysis
Date: 2025
Downloads: https://soapubs.com/index.php/EI/article/view/349/349 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:axf:journl:v:2:y:2025:i:5:p:25-39
More articles in Education Insights from Scientific Open Access Publishing
Bibliographic data for series maintained by Yuchi Liu.