Maximum Hallucination Standards for Domain-Specific Large Language Models
Tingmingke Lu
Papers from arXiv.org
Abstract:
Large language models (LLMs) often generate inaccurate yet credible-sounding content, known as hallucinations. This inherent feature of LLMs poses significant risks, especially in critical domains. I analyze LLMs as a new class of engineering products, treating hallucinations as a product attribute. I demonstrate that, in the presence of imperfect awareness of LLM hallucinations and misinformation externalities, net welfare improves when the maximum acceptable level of LLM hallucinations is designed to vary with two domain-specific factors: the willingness to pay for reduced LLM hallucinations and the marginal damage associated with misinformation.
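The abstract's comparative statics can be illustrated with a deliberately simple toy model (my own construction, not the paper's): suppose domain welfare is W(h) = theta*(1 - h) - delta*h - c/h, where h is the permitted hallucination rate, theta the willingness to pay for reduced hallucinations, delta the marginal damage from misinformation, and c/h a stylized compliance cost. The optimal cap h* = sqrt(c/(theta + delta)) then tightens as either theta or delta rises, which is the qualitative point of a domain-specific standard. The sketch below is only that illustration, under these assumed functional forms.

    import numpy as np

    def optimal_standard(theta, delta, c):
        """Hypothetical toy model (not the paper's specification).

        Welfare: W(h) = theta*(1 - h) - delta*h - c/h, with
        h the permitted hallucination rate, theta the willingness
        to pay for reduced hallucinations, delta the marginal
        misinformation damage, and c/h a stylized compliance cost.
        The first-order condition c/h**2 = theta + delta gives
        h* = sqrt(c / (theta + delta)), capped at 1.
        """
        return min(1.0, float(np.sqrt(c / (theta + delta))))

    # Under this toy model, a high-stakes domain (large theta and
    # delta) warrants a tighter cap than a low-stakes domain.
    print(optimal_standard(theta=10.0, delta=20.0, c=0.3))  # ~0.10
    print(optimal_standard(theta=1.0,  delta=0.5,  c=0.3))  # ~0.45

Both parameters enter symmetrically here only because of the assumed linear damage and benefit terms; the point of the example is simply that the optimal cap varies across domains rather than being uniform.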
Date: 2025-03
Downloads: http://arxiv.org/pdf/2503.05481 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2503.05481