A Mathematical Investigation of Hallucination and Creativity in GPT Models
Minhyeok Lee
Affiliation: School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
Mathematics, 2023, vol. 11, issue 10, 1-17
Abstract:
In this paper, we present a comprehensive mathematical analysis of the hallucination phenomenon in generative pre-trained transformer (GPT) models. We rigorously define and measure hallucination and creativity using concepts from probability theory and information theory. By introducing a parametric family of GPT models, we characterize the trade-off between hallucination and creativity and identify an optimal balance that maximizes model performance across various tasks. Our work offers a novel mathematical framework for understanding the origins and implications of hallucination in GPT models and paves the way for future research and development in the field of large language models (LLMs).
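The abstract does not reproduce the paper's formal definitions, but the general idea of information-theoretic measures over a model's output distribution can be illustrated with a small sketch. Here, purely as an assumption for illustration (not the paper's actual definitions), Shannon entropy of a next-token distribution stands in for a creativity proxy, and the Kullback-Leibler divergence from a grounded reference distribution stands in for a hallucination proxy; the distributions themselves are hypothetical.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def kl_divergence(p, q):
    """KL(p || q): divergence of the model's distribution p from a
    reference distribution q (assumed to share the same support)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a 4-token vocabulary.
model_p = [0.70, 0.15, 0.10, 0.05]      # model under study
reference_q = [0.90, 0.05, 0.03, 0.02]  # grounded (factual) reference

creativity_proxy = entropy(model_p)                        # higher = more diverse output
hallucination_proxy = kl_divergence(model_p, reference_q)  # higher = further from reference

print(f"creativity proxy (entropy, nats): {creativity_proxy:.3f}")
print(f"hallucination proxy (KL, nats):   {hallucination_proxy:.3f}")
```

Under this toy framing, sharpening the model toward the reference drives the KL term toward zero while also lowering entropy, which is one intuitive way to see the hallucination-creativity trade-off the abstract describes.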
Keywords: generative pre-trained transformers; large language model; LLM; GPT; ChatGPT; hallucination; creativity
JEL-codes: C
Date: 2023
Downloads:
https://www.mdpi.com/2227-7390/11/10/2320/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/10/2320/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:10:p:2320-:d:1148235