Modeling very large losses
Henryk Gzyl
Journal of Operational Risk
Abstract:
In this paper, we present a simple probabilistic model for aggregating very large losses into a loss collection. This supposes that "standard" losses come in various possible sizes – small, moderate and large – which, fortunately, seem to occur with decreasing frequency. Standard modeling allows us to infer a probability distribution describing their occurrence. From the historical record, we know that very large losses do occur, albeit very rarely, yet they are not usually included in the available data sets. Such losses should be made part of the distribution for computation purposes. For example, to a bank they may be helpful in the computation of economic or regulatory capital, while to an insurance company they may be useful in the computation of premiums for losses due to catastrophic events. We develop a simple modeling procedure that allows us to include very large losses in a loss distribution obtained from moderately sized loss data. We say that a loss is large when it is larger than the value-at-risk (VaR) at a high confidence level. The original and extended distributions will have the same VaR but quite different values of tail VaR (TVaR).
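The key property stated in the abstract – that the extended distribution keeps the original VaR while its TVaR changes substantially – can be illustrated empirically. The sketch below is not the paper's construction; it is a minimal, hypothetical example (lognormal severities and a 10x inflation factor are assumptions) showing that modifying only the losses strictly beyond the empirical VaR leaves VaR unchanged but increases TVaR.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.99  # confidence level for VaR/TVaR

def var(losses, alpha):
    # Empirical VaR: the ceil(alpha * n)-th order statistic of the sample.
    s = np.sort(losses)
    k = int(np.ceil(alpha * len(s))) - 1
    return float(s[k])

def tvar(losses, alpha):
    # Empirical tail VaR: mean of all losses at or above VaR.
    v = var(losses, alpha)
    return float(losses[losses >= v].mean())

# "Standard" losses: a moderately sized sample of lognormal severities
# (an assumed severity model, for illustration only).
base = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
v = var(base, alpha)

# Extended sample: keep every loss up to VaR, but inflate the losses
# strictly beyond VaR by a hypothetical factor of 10 to stand in for
# very large, rarely observed losses.
extended = base.copy()
extended[extended > v] *= 10.0

# The order statistics up to the VaR level are untouched, so the
# empirical VaR is identical, while the tail average grows sharply.
print(var(extended, alpha) == v)            # same VaR
print(tvar(extended, alpha) > tvar(base, alpha))  # larger TVaR
```

Because only observations strictly above the VaR order statistic are altered, the count of losses at or below VaR is preserved, which is exactly why the quantile itself does not move while the tail expectation does.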
Downloads: https://www.risk.net/journal-of-operational-risk/5 ... ng-very-large-losses (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:rsk:journ3:5563531