IFEval-Extended: Enhancing Instruction-Following Evaluation in Large Language Models through Dynamic Prompt Generation
Bohdan Kovalevskyi
Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023, 2024, vol. 5, issue 1, 513-524
Abstract:
This paper introduces IFEval-Extended, an innovative benchmark for evaluating the instruction-following capabilities of Large Language Models (LLMs). Building upon the foundational principles of the existing IFEval framework, IFEval-Extended addresses the limitations of predefined prompts by employing a dynamic, generative approach to instruction synthesis. This method allows thousands of unique, human-like instructions to be created from a single base template, mitigating the risk of overfitting and enhancing the diversity and robustness of the evaluation process. The benchmark extends the original set of instruction categories in IFEval, providing a more granular assessment of LLM performance across parameters such as language structure, keyword usage, and response formatting. The study evaluates state-of-the-art LLMs, including GPT-4o, Llama 3.1 (8B), and Llama 3 (70B), using strict and loose accuracy metrics. Results reveal that while models excel at handling simpler instructions, they struggle with complex tasks requiring precise adherence to multiple constraints. The findings highlight the strengths and weaknesses of current LLM capabilities, offering valuable insights for model development and real-world applications. IFEval-Extended contributes to the ongoing development of more robust, scalable, and objective LLM evaluation methods, thereby advancing the field of Natural Language Processing.
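
To make the abstract's description concrete, the following is a minimal, hypothetical Python sketch (not taken from the paper or its code) of how a single base template might be expanded into many unique instructions and how a response could be checked under strict versus loose accuracy. All identifiers (BASE_TEMPLATE, generate_instruction, score, the topic and keyword lists, and the one-sentence tolerance used for "loose") are illustrative assumptions, not part of IFEval-Extended itself.

# Hypothetical illustration of dynamic prompt generation and
# strict/loose scoring as described in the abstract; not the authors' code.
import random
import re

# One base template expands into many concrete instructions
# by sampling the slot values below.
BASE_TEMPLATE = "Write a {length}-sentence summary of {topic} and include the keyword '{keyword}'."
TOPICS = ["renewable energy", "graph databases", "sleep hygiene"]
KEYWORDS = ["efficiency", "latency", "recovery"]

def generate_instruction(rng: random.Random) -> dict:
    """Sample one concrete instruction plus the constraints to verify later."""
    length = rng.randint(2, 6)
    topic = rng.choice(TOPICS)
    keyword = rng.choice(KEYWORDS)
    prompt = BASE_TEMPLATE.format(length=length, topic=topic, keyword=keyword)
    return {"prompt": prompt, "length": length, "keyword": keyword}

def sentence_count(text: str) -> int:
    """Rough sentence count based on terminal punctuation."""
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

def score(response: str, spec: dict) -> dict:
    """Strict: every constraint holds exactly. Loose: minor deviations tolerated."""
    keyword_ok = spec["keyword"].lower() in response.lower()
    exact_len = sentence_count(response) == spec["length"]
    near_len = abs(sentence_count(response) - spec["length"]) <= 1
    return {"strict": keyword_ok and exact_len, "loose": keyword_ok and near_len}

if __name__ == "__main__":
    rng = random.Random(0)
    spec = generate_instruction(rng)
    print(spec["prompt"])
    print(score("Solar panels improve efficiency. They keep getting cheaper.", spec))

In this sketch, varying the sampled slot values is what yields thousands of distinct prompts from one template, while the paired constraint record makes each generated prompt automatically verifiable.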
Keywords: Large Language Models (LLMs); Natural Language Processing (NLP); Instruction Following; Benchmark; Evaluation Framework; Dynamic Prompt Generation; Overfitting; Scalability; Generalizability; Model Performance
Date: 2024
Downloads: https://newjaigs.com/index.php/JAIGS/article/view/299 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:das:njaigs:v:5:y:2024:i:1:p:513-524:id:299