Textual analysis of insurance claims with large language models
Dongchen Li,
Zhuo Jin,
Linyi Qian and
Hailiang Yang
Journal of Risk & Insurance, 2025, vol. 92, issue 2, 505-535
Abstract:
This study proposes a comprehensive and general framework for examining discrepancies in textual content using large language models (LLMs), broadening application scenarios in insurtech and risk management and conducting empirical research grounded in actual needs and real‐world data. The framework integrates OpenAI's interface to embed texts and project them into external categories, while using distance metrics to evaluate discrepancies. To identify significant disparities, we design prompts to analyze three types of relationships: identical information, logical relationships, and potential relationships. Our empirical analysis shows that 22.1% of samples exhibit substantial semantic discrepancies, and 38.1% of the samples with significant differences contain at least one of the identified relationships. The average processing time per sample does not exceed 4 seconds, and all processes can be adjusted to actual needs. Backtesting results and comparisons with traditional NLP methods further demonstrate that the proposed method is both effective and robust.
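The distance-based screening step described in the abstract can be sketched as follows. This is an illustrative assumption of how such a pipeline might look, not the authors' implementation: the cosine metric, the `0.25` threshold, and the toy vectors are hypothetical stand-ins (in the paper, embeddings come from OpenAI's interface).

```python
import numpy as np

def cosine_distance(u, v):
    """1 minus cosine similarity: 0 for identical direction, up to 2 for opposite."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def flag_discrepancy(emb_a, emb_b, threshold=0.25):
    """Flag a text pair as semantically discrepant when the embedding
    distance exceeds a threshold (0.25 here is an arbitrary example)."""
    return cosine_distance(emb_a, emb_b) > threshold

# Toy low-dimensional vectors standing in for real text embeddings:
similar_pair  = ([1.0, 0.2, 0.0], [0.9, 0.25, 0.05])   # near-duplicate content
divergent_pair = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])    # unrelated content

print(flag_discrepancy(*similar_pair))   # → False
print(flag_discrepancy(*divergent_pair)) # → True
```

Pairs flagged this way would then be passed to prompt-based analysis of the three relationship types (identical information, logical relationships, potential relationships).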
Date: 2025
Downloads: https://doi.org/10.1111/jori.70004
Persistent link: https://EconPapers.repec.org/RePEc:bla:jrinsu:v:92:y:2025:i:2:p:505-535
Ordering information: This journal article can be ordered from
http://www.wiley.com/bw/subs.asp?ref=0022-4367
Journal of Risk & Insurance is currently edited by Joan T. Schmit
More articles in Journal of Risk & Insurance from The American Risk and Insurance Association Contact information at EDIRC.
Bibliographic data for series maintained by Wiley Content Delivery.