Model Choice and Value-at-Risk Performance
Chris Brooks and Gita Persand
Financial Analysts Journal, 2002, vol. 58, issue 5, 87-97
Abstract:
Broad agreement exists among both the investment banking and regulatory communities that the use of internal risk management models is an efficient means for calculating capital risk requirements. The determination of the model parameters laid down by the Basle Committee on Banking Supervision as necessary for estimating and evaluating capital adequacy, however, has received little academic scrutiny. We investigate a number of issues of statistical modeling in the context of determining market-based capital risk requirements. We highlight several potentially serious pitfalls in commonly applied methodologies and conclude that simple methods for calculating value at risk often perform better than complex procedures. Our results thus have important implications for risk managers and market regulators.

Broad agreement exists in both the investment banking and regulatory communities that the use of internal risk management models can provide an efficient means for calculating capital risk requirements. The determination of the model parameters necessary for estimating and evaluating the capital adequacies laid down by the Basle Committee on Banking Supervision, however, has received little academic scrutiny. We extended recent research in this area by evaluating the statistical framework proposed by the Basle Committee and by comparing several alternative ways to estimate capital adequacy. The study we report also investigated a number of issues concerning statistical modeling in the context of determining market-based capital risk requirements. We highlight in this article several potentially serious pitfalls in commonly applied methodologies.

Using data for 1 January 1980 through 25 March 1999, we calculated value at risk (VAR) for six assets: three for the United Kingdom and three for the United States. The U.K. series consisted of the FTSE All Share Total Return Index, the FTA British Government Bond Index (for bonds of more than 15 years), and the Reuters Commodities Price Index; the U.S. series consisted of the S&P 500 Index, the 90-day T-bill, and a U.S. government bond index (for 10-year bonds). We also constructed two equally weighted portfolios of these three assets, one for the United Kingdom and one for the United States.

We used both parametric models (equally weighted, exponentially weighted, and generalized autoregressive conditional heteroscedasticity) and nonparametric models to measure VAR, and we applied a method based on the generalized Pareto distribution, which allows for the fat-tailed nature of the return distributions. Following the Basle Committee rules, we determined the adequacy of the VAR models by using backtests (i.e., out-of-sample tests), which count the number of days during the past trading year on which the capital charge was insufficient to cover daily trading losses.

We found that, although the VAR estimates from the various models appear quite similar, the models produce substantially different results for the number of days on which realized losses exceeded minimum capital risk requirements. We also found that the effect on model performance of using longer runs of data (rather than the single trading year required by the Basle Committee) depends on the model and asset series under consideration. We discovered that a method based on quantile estimation performed considerably better in many instances than simple parametric approaches based on the normal distribution or a more complex parametric approach based on the generalized Pareto distribution. We show that using critical values from a normal distribution in conjunction with a parametric approach when the actual data are fat tailed can lead to a substantially less accurate VAR estimate (specifically, a systematic understatement of VAR) than using a simple nonparametric approach.

Finally, the closer quantiles are to the mean of the distribution, the more accurately they can be estimated. Therefore, if a regulator has the desirable objective of ensuring that virtually all probable losses are covered, using a smaller nominal coverage probability (say, 95 percent instead of 99 percent) combined with a larger multiplier is preferable. Our results thus have important implications for risk managers and market regulators.
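The contrast the abstract draws between a normal (parametric) VAR estimate and a simple quantile-based (nonparametric) one, together with the exceedance-counting backtest, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the fat-tailed sample is a hypothetical normal mixture, and the 95 percent coverage level is one of the levels the paper discusses.

```python
import random
import statistics

random.seed(42)
# Hypothetical fat-tailed daily returns: a two-regime normal mixture
# (calm 90% of days, turbulent 10%) -- illustrative only.
returns = [random.gauss(0, 0.01) if random.random() < 0.9 else random.gauss(0, 0.03)
           for _ in range(1000)]

def parametric_var(rets, coverage=0.95):
    """Normal parametric VAR: mu + z*sigma, reported as a positive loss."""
    mu = statistics.mean(rets)
    sigma = statistics.stdev(rets)
    z = statistics.NormalDist().inv_cdf(1 - coverage)  # about -1.645 at 95%
    return -(mu + z * sigma)

def quantile_var(rets, coverage=0.95):
    """Nonparametric VAR: the empirical lower-tail quantile of returns."""
    srt = sorted(rets)
    idx = int((1 - coverage) * len(srt))
    return -srt[idx]

def backtest_exceedances(rets, var):
    """Backtest in the Basle spirit: count days on which the realized
    loss exceeded the VAR-based capital charge."""
    return sum(1 for r in rets if -r > var)

print(parametric_var(returns), quantile_var(returns))
print(backtest_exceedances(returns, parametric_var(returns)))
```

Because the simulated returns are fatter-tailed than the normal distribution assumes, the parametric estimate will tend to understate tail risk at high coverage levels, which is the systematic understatement the paper documents; the in-sample empirical quantile, by construction, is exceeded on almost exactly the nominal fraction of days.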
Date: 2002
Citations: View citations in EconPapers (1)
Downloads: (external link)
http://hdl.handle.net/10.2469/faj.v58.n5.2471 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:taf:ufajxx:v:58:y:2002:i:5:p:87-97
Ordering information: This journal article can be ordered from
http://www.tandfonline.com/pricing/journal/ufaj20
DOI: 10.2469/faj.v58.n5.2471
Financial Analysts Journal is currently edited by Maryann Dupes
More articles in Financial Analysts Journal from Taylor & Francis Journals
Bibliographic data for series maintained by Chris Longhurst ().