Calibration of credit default probabilities in discrete default intensity and logit models
Anand Deo, Sandeep Juneja and Aakash Kalyani
Additional contact information
Anand Deo: Tata Institute of Fundamental Research
Sandeep Juneja: Centre for Advanced Financial Research and Learning (CAFRAL)
Working Papers from Centre for Advanced Financial Research and Learning (CAFRAL)
Abstract:
Discrete default intensity or logit-type models are commonly used as reduced-form models for the conditional default probabilities of corporate loans, where the default probability depends on macroeconomic as well as firm-specific covariates. Typically, maximum likelihood (ML) methods are used to estimate the parameters of these models. Since defaults are rare, a large amount of data is needed for this estimation, making the resulting optimization computationally time-consuming. In this paper, we observe that because defaults are typically rare, say on average 1-2% per annum, under a Gaussian assumption on the covariates (which may be achieved by transforming them), the first-order equations from ML estimation suggest a simple, accurate and intuitively appealing closed-form estimator of the underlying parameters. To gain further insight, we analyze the properties of the proposed estimator as well as the ML estimator in an asymptotic regime where the conditional default probabilities decrease to zero while the number of firms and the time period of available data increase to infinity. The covariates are assumed to evolve as a stationary Gaussian process. We characterize how the mean square error of the estimator depends on the number of firms and on the time period of available data. Our conclusion, validated by numerical analysis, is that when the underlying model is correctly specified, the proposed estimator is typically comparable to, or only slightly worse than, the ML estimator. Importantly, however, since in practice any model is misspecified due to hidden factors, the proposed and the ML estimators are then equally good (or equally bad). Further, in this setting, beyond a point both estimators are largely insensitive to additional data, whether from more firms or from longer time periods. This suggests that gathering large amounts of expensive data may add little value to model calibration.
The proposed approximations should also have applications outside finance where logit-type models are used and the probabilities of interest are small.
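The flavor of such a closed-form estimator can be illustrated with a small simulation. This is a sketch under stated assumptions, not the paper's exact formulas: for a logit model P(default | x) = sigma(a + b'x) with x ~ N(0, Sigma) and small default probabilities, sigma(z) is approximately e^z, and Stein's identity for Gaussian vectors then gives E[x | default] approximately equal to Sigma b. That suggests estimating the slope as the inverse covariance matrix times the average covariate vector over defaulted observations, with no iterative optimization. All variable names below are illustrative.

```python
# Illustrative sketch (assumptions labeled above; not the paper's exact
# estimator): closed-form calibration of a rare-event logit model.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200_000, 2
b_true = np.array([0.5, -0.3])
a_true = -4.5  # intercept chosen so that defaults are rare (roughly 1% per period)

# Covariates drawn from a stationary Gaussian with known covariance Sigma.
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
x = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Simulate defaults from the logit model.
p = 1.0 / (1.0 + np.exp(-(a_true + x @ b_true)))
y = rng.random(n) < p  # default indicators; y.mean() is on the order of 1%

# Closed-form slope estimate: inverse covariance times the mean covariate
# vector among defaulted observations (Sigma assumed known here; in practice
# it would be estimated from the covariate sample).
b_hat = np.linalg.solve(Sigma, x[y].mean(axis=0))

# Intercept recovered by matching the overall default rate, using the
# Gaussian moment identity E[e^{b'x}] = e^{b' Sigma b / 2}.
a_hat = np.log(y.mean()) - 0.5 * b_hat @ Sigma @ b_hat

print("slope estimate:", b_hat, "intercept estimate:", a_hat)
```

With 200,000 observations and a roughly 1% default rate, only a few thousand defaults drive the estimate, yet the sketch recovers the parameters well; this mirrors the abstract's point that beyond a point extra data adds little, since the information content is governed by the (rare) defaults.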
Pages: 39
Date: 2017-06
Downloads: (external link)
https://www.cafral.org.in/sfControl/content/Speech ... k_models_Updated.pdf
Persistent link: https://EconPapers.repec.org/RePEc:ris:cafral:022330