Fitting Censored and Truncated Regression Data Using the Mixture of Experts Models

Tsz Chai Fung, Andrei L. Badescu and X. Sheldon Lin

North American Actuarial Journal, 2022, vol. 26, issue 4, 496-520

Abstract: The logit-weighted reduced mixture of experts model (LRMoE) is a flexible yet analytically tractable nonlinear regression model. Although it has proven useful for modeling insurance loss frequencies and severities, model calibration becomes challenging when the data are censored or truncated, as is common in actuarial practice. In this article, we present an extended expectation–conditional maximization (ECM) algorithm that efficiently fits the LRMoE to randomly censored and randomly truncated regression data. The effectiveness of the proposed algorithm is examined empirically through a simulation study. Using real automobile insurance data sets, the usefulness and importance of the proposed algorithm are demonstrated through two actuarial applications: individual claim reserving and deductible ratemaking.
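To illustrate the kind of fitting problem the abstract describes, the following is a minimal sketch of an EM-style calibration of a two-component, logit-gated mixture of exponential experts on right-censored data. It is not the authors' extended ECM algorithm: the expert family, the single gating covariate, the gradient-step M-step for the gating coefficients, and the function name `fit_lrmoe_sketch` are all simplifying assumptions made here for illustration. Censored observations contribute their survival probability to the likelihood in the E-step and their conditional expected value in the M-step.

```python
import numpy as np

def fit_lrmoe_sketch(x, y, censored, n_iter=200, lr=0.1):
    """Toy EM fit of a two-component exponential mixture of experts with
    logistic (logit) gating on a scalar covariate x, allowing right
    censoring. Simplified illustration only, not the paper's ECM algorithm.

    x        : (n,) covariate
    y        : (n,) observed value (censoring point where censored)
    censored : (n,) boolean, True if the observation is right-censored
    """
    alpha = np.zeros(2)           # gating coefficients: logit P(z=1|x) = alpha[0] + alpha[1]*x
    rates = np.array([0.5, 2.0])  # initial exponential rates of the two experts
    for _ in range(n_iter):
        # Gating probabilities for each observation.
        p1 = 1.0 / (1.0 + np.exp(-(alpha[0] + alpha[1] * x)))
        gate = np.stack([1.0 - p1, p1], axis=1)            # (n, 2)

        # Component contribution: density f_k(y) for exact observations,
        # survival S_k(y) for right-censored ones.
        surv = np.exp(-np.outer(y, rates))                 # (n, 2)
        dens = rates * surv
        lik = np.where(censored[:, None], surv, dens)

        # E-step: posterior responsibilities over components.
        num = gate * lik
        resp = num / num.sum(axis=1, keepdims=True)

        # M-step for the rates: for an exponential, E[Y | Y > y] = y + 1/rate,
        # so censored observations contribute an expected completed value.
        y_eff = y[:, None] + censored[:, None] / rates     # (n, 2)
        rates = resp.sum(axis=0) / (resp * y_eff).sum(axis=0)

        # M-step for the gating: one gradient-ascent step on the
        # responsibility-weighted logistic log-likelihood.
        grad0 = (resp[:, 1] - p1).sum()
        grad1 = ((resp[:, 1] - p1) * x).sum()
        alpha += lr * np.array([grad0, grad1]) / len(x)
    return alpha, rates
```

The same structure extends to the settings in the article by swapping in other expert densities, multivariate gating covariates, and truncation-adjusted likelihood terms; a full ECM treatment would replace the gradient step with conditional maximization of each parameter block.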

Date: 2022
Citations: View citations in EconPapers (4)

Downloads: (external link)
http://hdl.handle.net/10.1080/10920277.2021.2013896 (text/html)
Access to full text is restricted to subscribers.



Persistent link: https://EconPapers.repec.org/RePEc:taf:uaajxx:v:26:y:2022:i:4:p:496-520

Ordering information: This journal article can be ordered from
http://www.tandfonline.com/pricing/journal/uaaj20

DOI: 10.1080/10920277.2021.2013896


North American Actuarial Journal is currently edited by Kathryn Baker

More articles in North American Actuarial Journal from Taylor & Francis Journals
Bibliographic data for series maintained by Chris Longhurst.

 
Page updated 2025-03-20
Handle: RePEc:taf:uaajxx:v:26:y:2022:i:4:p:496-520