Evaluating AI-Generated Emails: A Comparative Efficiency Analysis
Marina Jovic and Salaheddine Mnasri
World Journal of English Language, 2024, vol. 14, issue 2, 502
Abstract:
This study investigates the efficiency of large language models (LLMs) in producing routine, negative, and persuasive business emails for educational purposes within the context of Business Writing. Specifically, it compares the outputs generated by four widely used LLMs (ChatGPT 3.5, Llama 2, Bing Chat, and Bard) when presented with identical email scenarios. The generated emails are evaluated with a detailed rubric, allowing a systematic assessment of the LLMs' performance across the three email types. The results show that outputs from the same prompt vary greatly despite the rather formulaic nature of business emails: some LLMs struggle to follow the requested structure and maintain a consistent tone, while others have issues with unity and conciseness. The findings hold implications for teaching business writing (rubrics, task instructions, in-class implementation), as well as for the integration of AI in professional communication at large.
Date: 2024
Citations: 1
Downloads:
https://www.sciedupress.com/journal/index.php/wjel/article/download/24659/15730 (application/pdf)
https://www.sciedupress.com/journal/index.php/wjel/article/view/24659 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:jfr:wjel11:v:14:y:2024:i:2:p:502
World Journal of English Language is currently edited by Joe Nelson