A maximum entropy approach to the loss data aggregation problem
Henryk Gzyl, Erika Gomes-Gonçalves and Silvia Mayoral
Journal of Operational Risk
Abstract:
One of the main problems in the advanced measurement approach to determining operational risk regulatory capital consists of computing the distribution of losses when the data is made up of aggregated losses caused by different types of risk events in different business lines. A similar problem appears in the insurance industry when losses of different types must be aggregated. When the data is collected well, that is, when the losses are recorded as a joint vector, maxentropic techniques are quite suitable for finding the probability density of the aggregate loss. When the data is not collected well, the maxentropic procedure provides marginal densities, which can then be coupled by means of an appropriate copula; this is one of the two procedures we apply. Either way, both possibilities hinge in an essential way on the maxentropic technique for determining a probability density from its Laplace transform. This is because such techniques provide analytical expressions for the densities, which makes many numerical procedures easier to implement. The aim of this paper is to examine and compare these alternative ways of solving the problem of determining the density of aggregate losses.
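The core step the abstract refers to, recovering a density from a few values of its Laplace transform by maximum entropy, can be illustrated with a small sketch. The setup below is a generic maxent dual formulation, not the paper's exact algorithm: the change of variable Y = exp(-S) maps Laplace-transform values E[exp(-a_j S)] into fractional moments E[Y^{a_j}] on (0, 1], and the maxent density has the form g(y) = exp(-lam0 - sum_j lam_j y^{a_j}), with the multipliers found by minimizing a convex dual. The toy target (S ~ Gamma(2, 1), so mu(a) = 1/(1+a)^2) and the choice of eight integer exponents are my own assumptions for the demonstration.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

# Toy inputs (my assumption): S ~ Gamma(2, 1), whose Laplace transform is
# mu(a) = 1/(1+a)^2, evaluated at eight points a_1..a_8.
alphas = np.arange(1.0, 9.0)
mus = 1.0 / (1.0 + alphas) ** 2

# After the change of variable Y = exp(-S), these become fractional
# moments E[Y^{a_j}] of a density g on (0, 1].
y = np.linspace(1e-6, 1.0, 4000)          # integration grid on (0, 1]
powers = y[None, :] ** alphas[:, None]    # y^{a_j}, shape (8, 4000)

def dual(lam):
    """Convex dual objective log Z(lam) + lam.mu and its gradient."""
    w = np.exp(-(lam @ powers))           # unnormalized maxent density
    z = trapezoid(w, y)
    val = np.log(z) + lam @ mus
    grad = mus - trapezoid(powers * w, y, axis=1) / z
    return val, grad

res = minimize(dual, np.zeros(alphas.size), jac=True, method="L-BFGS-B")
lam = res.x

g = np.exp(-(lam @ powers))
g /= trapezoid(g, y)                      # maxent density of Y = exp(-S)

def aggregate_density(s):
    """Recovered density of S, via f_S(s) = g(exp(-s)) * exp(-s)."""
    return np.interp(np.exp(-s), y, g) * np.exp(-s)
```

At the dual optimum the gradient vanishes, which is exactly the statement that the reconstructed density reproduces the prescribed Laplace-transform values; the analytical exponential form of `g` is what makes downstream numerics (quantiles, risk measures) straightforward, as the abstract notes.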
Persistent link: https://EconPapers.repec.org/RePEc:rsk:journ3:2450287