EconPapers    
Enabling the Digitalization of Claim Management in the Insurance Value Chain Through AI-Based Prototypes: The ELIS Innovation Hub Approach

Alessandra Andreozzi, Lorenzo Ricciardi Celsi and Antonella Martini
Additional contact information
Alessandra Andreozzi: University of Pisa
Lorenzo Ricciardi Celsi: ELIS Innovation Hub
Antonella Martini: University of Pisa

A chapter in Digitalization Cases Vol. 2, 2021, pp 19-43 from Springer

Abstract:

(a) Situation faced: Digital transformation in the insurance value chain is fostering the adoption of artificial intelligence, in particular deep learning methods, to improve and automate two relevant tasks in the claim management process: (i) sensitive data detection and anonymization and (ii) manipulation detection on images. The proposed approach is technically feasible, lightweight, and sufficiently scalable thanks to the properties of currently available cloud platforms, and it also yields a considerable reduction in operational costs.

(b) Action taken: Since well-established guidelines for insurance digitalization use cases requiring deep learning do not yet exist, we propose a customized data science workflow for designing and developing two prototypes that tackle (i) sensitive data detection and anonymization and (ii) manipulation detection on claim images. The proposed six-step method is implemented using deep convolutional neural networks in Keras and TensorFlow and integrates seamlessly with the most frequently used cloud environments. During prototyping, several training and testing iterations were carried out, progressively fine-tuning the detection models until the desired performance was achieved.

(c) Results achieved: The developed prototypes are able to (i) robustly anonymize claim images and (ii) robustly detect manipulations on claim images, where robustness means that, from a statistical viewpoint, the declared performance level is preserved even under highly heterogeneous distributions of the input data. The technical realization relies on open-source software and on the availability of cloud platforms, the latter both for training and for scalability. This demonstrates the applicability of our methodology, given a reliable analysis of the available resources, including the preparation of an appropriate training dataset for the models.

(d) Lessons learned: The present work demonstrates the feasibility of the proposed deep-learning-based six-step methodology for image anonymization and manipulation detection and discusses the challenges encountered and lessons learned during implementation. Key learnings include the importance of business translation, data quality, data preparation, and model training.
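The detection models described in the abstract are CNNs trained in Keras/TensorFlow and are not reproduced here. As an illustration of the anonymization step of prototype (i) only, the following is a minimal sketch, assuming a detector has already returned pixel bounding boxes of sensitive regions; the function name, box format, and masking modes are hypothetical, not the chapter's actual implementation:

```python
import numpy as np

def anonymize_regions(image, boxes, mode="blackout", block=8):
    """Mask detected sensitive regions of a claim image.

    image : H x W x C uint8 array
    boxes : iterable of (x0, y0, x1, y1) pixel boxes from a detector
    mode  : "blackout" fills each region with zeros; "pixelate"
            replaces it with coarse block-average tiles.
    """
    out = image.copy()  # leave the original image untouched
    for x0, y0, x1, y1 in boxes:
        region = out[y0:y1, x0:x1]  # a view into out, edited in place
        if mode == "blackout":
            region[...] = 0
        elif mode == "pixelate":
            h, w = region.shape[:2]
            for by in range(0, h, block):
                for bx in range(0, w, block):
                    tile = region[by:by + block, bx:bx + block]
                    # per-channel mean over the tile, cast back to uint8
                    tile[...] = tile.mean(axis=(0, 1)).astype(out.dtype)
    return out
```

Blackout is irreversible by construction; pixelation preserves coarse layout, which can help a human reviewer confirm what was redacted without exposing its content.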

Date: 2021

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.


Persistent link: https://EconPapers.repec.org/RePEc:spr:mgmchp:978-3-030-80003-1_2

Ordering information: This item can be ordered from
http://www.springer.com/9783030800031

DOI: 10.1007/978-3-030-80003-1_2

More chapters in Management for Professionals from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Page updated 2025-04-01
Handle: RePEc:spr:mgmchp:978-3-030-80003-1_2