Application of Risk Theory to Interpretation of Stochastic Cash-Flow-Testing Results

Edward Robbins, Samuel Cox and Richard Phillips

North American Actuarial Journal, 1997, vol. 1, issue 2, 85-98

Abstract: This paper offers practical guidance to actuaries who are seeking ways to evaluate and manage the output of the stochastic cash-flow-testing process. A commonly expressed opinion about the stochastic approach is that almost all the results are successes, whereas the adverse scenarios are arguably the ones of major interest. This paper responds to the following question: “Given that I have run a large number of stochastic cash-flow-testing scenarios, with only a very small number of scenarios landing in the adverse area, or ‘ruin tail,’ how can I use the results of the entire set of observations to better estimate the area under the ruin tail?”

We begin the paper with a discussion of the types of variables that could be investigated using the output of typical simulation models. The choice of variable to examine is flexible: candidates include the accumulated surplus at the end of the scenario’s time horizon, the present value of the accumulated surplus discounted to the beginning of the time horizon, or the lowest risk-based capital (RBC) multiple realized during the time horizon. We use the present value of accumulated surplus in this study.

Once the variable of choice has been decided, we illustrate various methods from risk theory that could be used to investigate it. All the methods we discuss are tools readily available to the actuary, originally developed as part of traditional risk theory. To illustrate these methods, we use output from a simulation model valuing a portfolio of single-premium deferred annuities under various stochastic interest rate scenarios. We review each technique and then illustrate how to adapt it for this specific purpose. In particular, we discuss parametric model selection for standard families based on maximum likelihood estimation (MLE), mixtures of standard models, Esscher approximations, and the normal power method.

Our work shows that parametric models selected via MLE have several advantages over classical methods such as the Esscher and normal power approximations. Parametric models fit the entire distribution, whereas the classical methods give only point estimates. Also, the statistical theory of MLE is well understood and allows the actuary, for example, to calculate useful statistics such as confidence intervals. Simple mixtures of familiar two-parameter models are also discussed because they are easy to fit using moment estimators; their main drawback is that the large number of parameters to be estimated can make them difficult to work with. The classical methods are shown to be harder to use and do not give better results than the fitted parametric models.

We also address the issue of sample size.
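
To make the tail-estimation idea concrete, the following is a minimal sketch (not taken from the paper) of the workflow the abstract describes: fit a parametric model to simulated present values of accumulated surplus by MLE, read the ruin-tail probability off the fitted distribution, attach a large-sample confidence interval, and compare with a first-order normal power (Cornish-Fisher) adjustment. The synthetic data, the choice of a normal family, the ruin threshold of zero, and all names are illustrative assumptions, not the models, data, or results reported by the authors.

# Illustrative sketch only: tail estimation from cash-flow-testing output.
# Hypothetical data and model family; not the paper's actual application.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for simulation output: present value of accumulated surplus
# for each of N stochastic interest-rate scenarios (synthetic numbers).
N = 1000
pv_surplus = rng.normal(loc=50.0, scale=30.0, size=N)

# "Ruin tail": scenarios whose present value of surplus falls below a threshold.
ruin_threshold = 0.0
empirical_tail = np.mean(pv_surplus < ruin_threshold)

# 1) Parametric model fitted by MLE (a normal family here, purely as an example);
#    the tail probability comes from the fitted distribution, not from counting
#    the few adverse scenarios directly.
mu_hat, sigma_hat = stats.norm.fit(pv_surplus)            # MLE of location and scale
tail_mle = stats.norm.cdf(ruin_threshold, mu_hat, sigma_hat)

# Large-sample (delta-method) standard error of the fitted tail probability,
# using the asymptotic covariance of the normal MLEs.
z = (ruin_threshold - mu_hat) / sigma_hat
grad = np.array([-stats.norm.pdf(z) / sigma_hat,           # d tail / d mu
                 -z * stats.norm.pdf(z) / sigma_hat])      # d tail / d sigma
cov = np.diag([sigma_hat**2 / N, sigma_hat**2 / (2 * N)])  # asymptotic cov of (mu, sigma)
se_tail = float(np.sqrt(grad @ cov @ grad))
ci = (tail_mle - 1.96 * se_tail, tail_mle + 1.96 * se_tail)

# 2) A first-order normal power adjustment for skewness:
#    P(S <= x) is approximated by Phi(z - (gamma/6) * (z**2 - 1)),
#    with z the standardized threshold and gamma the sample skewness.
gamma = stats.skew(pv_surplus)
z_np = z - (gamma / 6.0) * (z**2 - 1.0)
tail_np = stats.norm.cdf(z_np)

print(f"empirical tail probability : {empirical_tail:.4f}")
print(f"MLE (normal) tail estimate : {tail_mle:.4f}  95% CI ({ci[0]:.4f}, {ci[1]:.4f})")
print(f"normal power approximation : {tail_np:.4f}")

The paper itself considers several standard families as well as mixtures of two-parameter models fitted by moment estimators; the sketch is meant only to show the general mechanics, i.e., that a fitted distribution delivers a tail probability (with a confidence interval) for the whole sample, while the normal power and Esscher methods yield point approximations at a chosen threshold.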

Date: 1997

Downloads: (external link)
http://hdl.handle.net/10.1080/10920277.1997.10595614 (text/html)
Access to full text is restricted to subscribers.

Persistent link: https://EconPapers.repec.org/RePEc:taf:uaajxx:v:1:y:1997:i:2:p:85-98

Ordering information: This journal article can be ordered from
http://www.tandfonline.com/pricing/journal/uaaj20

DOI: 10.1080/10920277.1997.10595614

North American Actuarial Journal is currently edited by Kathryn Baker

More articles in North American Actuarial Journal from Taylor & Francis Journals
Bibliographic data for series maintained by Chris Longhurst.

Handle: RePEc:taf:uaajxx:v:1:y:1997:i:2:p:85-98