
ChatGPT for complex text evaluation tasks

Mike Thelwall

Journal of the Association for Information Science & Technology, 2025, vol. 76, issue 4, 645-648

Abstract: ChatGPT and other large language models (LLMs) have been successful at natural and computer language processing tasks with varying degrees of complexity. This brief communication summarizes the lessons learned from a series of investigations into ChatGPT's use for the complex text analysis task of research quality evaluation. In summary, ChatGPT is very good at understanding and carrying out complex text processing tasks in the sense of producing plausible responses with minimal input from the researcher. Nevertheless, its outputs require systematic testing to assess their value because they can be misleading. In contrast to simple tasks, the outputs from complex tasks are highly varied, and better results can be obtained by repeating the prompts multiple times in different sessions and averaging the ChatGPT outputs. Varying ChatGPT's configuration parameters from their defaults does not seem to be useful, except for the length of the output requested.
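The repeat-and-average strategy mentioned in the abstract is simple to implement. The sketch below is a minimal illustration rather than the author's actual pipeline: it assumes a hypothetical ask_llm() helper that submits one prompt in a fresh session and returns a numeric research-quality score parsed from the model's reply.

    from statistics import mean

    def ask_llm(prompt: str) -> float:
        """Hypothetical helper: send `prompt` to the model in a fresh session
        and parse a numeric quality score from its reply. The client code and
        score scale depend on the API being used."""
        raise NotImplementedError

    def averaged_score(prompt: str, repeats: int = 5) -> float:
        """Query the model several times and average the scores, since single
        responses to complex tasks can vary considerably between sessions."""
        scores = [ask_llm(prompt) for _ in range(repeats)]
        return mean(scores)

    # Example: score one abstract against a quality-evaluation prompt.
    prompt = "Rate the research quality of the following abstract: ..."
    # print(averaged_score(prompt, repeats=10))

Averaging over repeated sessions smooths out the run-to-run variability that the article reports for complex evaluation tasks; the number of repeats is a cost-accuracy trade-off.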

Date: 2025

Downloads: https://doi.org/10.1002/asi.24966


Persistent link: https://EconPapers.repec.org/RePEc:bla:jinfst:v:76:y:2025:i:4:p:645-648

Ordering information: This journal article can be ordered from
http://www.blackwell ... bs.asp?ref=2330-1635

More articles in Journal of the Association for Information Science & Technology from Association for Information Science & Technology
Bibliographic data for series maintained by Wiley Content Delivery.

 
Handle: RePEc:bla:jinfst:v:76:y:2025:i:4:p:645-648