ICRFS™ Training videos
Videos marked with an (*) contain discussion of new content in the latest ICRFS™ release.
If for any reason you are unable to view the training or demonstration videos, please contact our support staff.
The training videos should be used for hands-on training. We suggest you run the videos on a separate computer using a data projector, and train as a group.
The only way you will learn all the new concepts and be able to exploit all the immense benefits is by using the system. Experiential learning is imperative.
It is important that you study the videos in sequential order as set out below.
Table of Contents
- 1.0 Introduction
- 2.0 LRT and ELRF
- 3.0 Introduction to PTF
- 4.0 Case Study: CTP
- 5.0 Heteroscedasticity
- 6.0 Case Study: ABC
- 7.0 Importing and COM
- 8.0 PTF case studies
- 9.0 Layers and PALD
- 10.0 Introduction to MPTF
- 11.0 MPTF Concepts
- 12.0 Capital Management
- 13.0 Solvency II
- 14.0 Other applications
- 15.0 Bootstrap
- 16.0 Updates
1.0 Introduction
1. Introduction to ICRFS™
1.1 Introduction to ICRFS™ database structures
The database functionality and navigation are studied. Data, models and links to reports all reside in one relational database. A database can either be remote (on a server that is shareable) or local. Communication between two databases has the same intuitive feel as using Windows Explorer for communicating between two sub-folders. The objects in the database are Triangle Groups (TG) and Composite Triangle Groups (CTG). TGs (and CTGs) also contain objects, namely, triangles, exposures, premiums, data sets, models and links to reports.
This video demonstrates:
- database manipulation
- database structure
- triangle group structure
- using variables and values to filter triangle groups
- system navigation
- creation of new databases and communication between two databases
- new triangle types
1.2 Overview of ICRFS™ modeling frameworks
In this video, a brief overview of the modeling frameworks included in ICRFS™ is presented.
These frameworks include:
- Link Ratio Techniques (LRT) including Bornhuetter-Ferguson
- Extended Link Ratio Family (ELRF) as discussed in the paper "Best Estimate for Reserves"
- Probabilistic Trend Family (PTF) modeling framework
- Multiple Probabilistic Trend Family (MPTF) modeling framework
There is a paradigm shift between link ratio techniques (LRT) and the probabilistic modeling frameworks PTF and MPTF. The ELRF modeling framework provides the bridge between the two frameworks.
An identified model in the PTF modeling framework gives a succinct description of the volatility in the data. The description of the volatility is represented by four graphs, which tell a story about the data.
Benefits of the ICRFS™ software package include:
- A user-configured, easily navigated database
- The database is a repository for the data, models, and forecast scenarios
- Uneven sampling periods: for example, accident year reserves versus quarterly evaluations.
- Models are saved in the triangle groups
- Monitoring and updating every review period is a seamless operation
- Diagnostics for existing link ratio methods
- Pricing both retrospective and prospective reinsurance structures
- Pricing for different limits for different years
- Future accident (underwriting) period segmentation pricing
- Understandable probabilistic models summarised by four interrelated pictures.
- Correlations (all three types: process, parameter, and reserve) and trends are measured from the data.
- Economic Capital: risk charges for combined reserve and underwriting risks. (Note there is usually additional diversification credit obtained for the combined risk charge on reserves and underwriting).
- Modeling wizard
- Reinsurance evaluation
Modeling multiple triangles simultaneously in the MPTF module has additional applications and benefits, including risk diversification analysis, capital allocation analysis, credibility modeling, and many other applications as seen in subsequent chapters.
1.3 Uncertainty and Variability
Variability and uncertainty are two distinct concepts and cannot be used interchangeably. Variability is an observed phenomenon that is to be measured and where appropriate explained. Uncertainty, on the other hand, refers to knowledge about variability.
This is easiest to explain by way of example. We demonstrate by comparing two games of chance where the parameters of the games are known. In this case, we have no uncertainty in our knowledge about either game. We 'know' the mean, standard deviation, and indeed the probabilities of all the outcomes.
Parameter uncertainty leads to uncertainty in the variability of the process: our knowledge about the variability is uncertain. The inherent variability (process variance) cannot be reduced.
1.4 Manual creation of Triangle Groups
In this video, manual creation of a triangle group is illustrated. Although triangle groups are usually created via an importing macro, it can be useful to create triangle groups and triangles manually for small projects.
Creating triangle types and triangles is also demonstrated, along with transferring data and models between similarly sized triangle groups.
2.0 LRT and ELRF
2. Modeling using the Link Ratio Techniques and Extended Link Ratio Family modules
2.1 Introduction to the Link Ratio Techniques module and the Extended Link Ratio Family module
In this video, the Link Ratio Techniques (LRT) module is discussed followed by an introduction to the Extended Link Ratio Family (ELRF) module. Commonly used navigation techniques are also demonstrated as part of the introductory video.
Each tab in the LRT results display is discussed and linked back to the underlying data. Methods of selecting ratio sets are shown. The flexibility in ratio selection is demonstrated; individual ratios can be modified if required.
Two smoothing algorithms for link ratio methods are available: two parameter smoothing and three parameter smoothing. Smoothing routines can be applied to a method (eg: volume weighted average) or to a subset of ratios within a method.
Forecast results include:
- A completed triangle table for incremental and cumulative arrays.
- Bornhuetter-Ferguson and Expected Loss Ratio forecasts (a premium vector needs to be associated with the dataset for this output to be meaningful)
- Forecast Summary results including forecasted Calendar period payment stream
The connection from the Link Ratio Techniques (LRT) to the Extended Link Ratio Family (ELRF) is outlined.
- Every link ratio can be treated as the slope of a line or a trend.
- A 'weighted average ratio' can be treated as a 'weighted average trend'.
- The calculation of the weighted average trend can be done using a regression analysis through the origin.
- Regression estimators of trends are equivalent to a weighted average link ratio for the same set of data points (a small numerical check of this equivalence is sketched below).
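As a quick check of the last point (a minimal Python sketch with hypothetical numbers, not ICRFS™ code): with weights 1/x, the weighted least squares slope through the origin reduces to the volume weighted average link ratio.

```python
import numpy as np

# Hypothetical cumulatives at development periods j and j+1 for three accident years.
x = np.array([1000.0, 1500.0, 2200.0])
y = np.array([1400.0, 2100.0, 2900.0])

# Volume weighted average (chain ladder) link ratio.
vwa_ratio = y.sum() / x.sum()

# Weighted least squares slope through the origin with weights w = 1/x:
# b_hat = sum(w*x*y) / sum(w*x^2) = sum(y) / sum(x).
w = 1.0 / x
b_hat = (w * x * y).sum() / (w * x**2).sum()

print(vwa_ratio, b_hat)  # identical values
```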
2.2 ALRT: The Aggregate LRT interface
Starting from saved models in LRT, we show how to create a Composite Cumulative dataset and open this in ALRT.
ALRT functions as a frame in which a number of simultaneous instances of LRT can be run. Each instance is under a separate tab and contains a complete LRT interface.
Forecasting in ALRT adds a tab with aggregate tables over all the forecasts. In cases where IL datasets are modelled in one or more of the LRT instances, ALRT checks compatibility of the data types before aggregating the tables. This is illustrated and explained.
2.3 The Mack Method and ELRF
In this video we continue with a discussion of the Mack Method. The Mack triangle group is used for this discussion as found in the Workbook databases provided with ICRFS™.
The default model in the ELRF modeling framework is Volume-Weighted-Average (Chain Ladder) link ratios formulated as regression estimators through the origin. This method is called the Mack method.
Assumptions made by the Mack method include:
- For a given value of X (the cumulative at a development period), the next cumulative development period (Y) lies on the hypothetical 'average trend line', that is, the average link ratio line.
- Variance assumption: variance of Y (about the average link ratio line) is proportional to X.
Regression estimators through the origin in ELRF are shown to be the same as the equivalent weighted average link ratios in LRT.
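In symbols, this regression formulation can be sketched as follows (standard notation, not ICRFS™ output; x is the cumulative at development period j, y the cumulative at j+1, w indexes accident periods):

```latex
y_{w,j+1} = b_j\,x_{w,j} + \varepsilon_{w,j}, \qquad
\operatorname{Var}(\varepsilon_{w,j}) = \sigma_j^{2}\,x_{w,j}, \qquad
\hat{b}_j = \frac{\sum_w y_{w,j+1}}{\sum_w x_{w,j}},
```

so the weighted least squares estimator through the origin (with weights 1/x) is exactly the volume weighted average link ratio.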
The connection between link ratios and regression estimators is reiterated in terms of the ELRF displays. Residuals are regarded as the difference between the trends in the data and the trends estimated by the method. The connection between residuals, link ratios, and the data are explained.
It is shown that, for the Mack data, the Mack method overfits the large values and underfits the low values. This is indicated by the downward trend in the residuals versus fitted values. The overfitting arises because, between every two consecutive development years, the regression requires a positive intercept.
The Murphy method is an extension of the Mack method that includes an intercept. Judging from the residuals versus fitted values and the other displays, the Murphy method gives better results than the Mack method.
2.4 Other models in the ELRF modeling framework
In the previous video, one extension to the calculation of average link ratios was considered (the addition of an intercept in the regression equations due to Murphy). In this video, this example is returned to in more detail.
The incremental version (Venter) of Murphy's equation is to be preferred, as only the incremental value needs to be predicted; the cumulative component is already known.
If the link ratio-1 in Venter's formulation is not significantly different from zero (ie, the link ratio is 1), then the cumulatives are not predictive of the next column of incrementals. That is, there is no correlation between the incrementals in one development year and the cumulative in the previous development year. In this case, a better way of projecting the incremental in the development period is by taking the average of the incrementals instead of using the previous cumulative.
What happens if there is a trend in the incrementals going down the accident years? It is shown that usually if there is a trend, the inclusion of the trend estimate will have more predictive power than the average link ratios. Note: trends are more helpful for prediction than link ratios. This feature of the data leads us naturally to the formulation of models which incorporate trends directly rather than link ratios - ie the Probabilistic Trend Family!
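One way to sketch the extended regression discussed above, in illustrative notation rather than ICRFS™'s exact parameterisation, is:

```latex
y_{w,j+1} - x_{w,j} \;=\; a_j \;+\; (b_j - 1)\,x_{w,j} \;+\; g_j\,(w-1) \;+\; \varepsilon_{w,j},
```

where a_j is Murphy's intercept, (b_j - 1) carries the link ratio, and g_j is an accident period trend. If b_j is not significantly different from 1, the cumulative has no predictive power for the next incremental; if g_j is significant, the trend does the predicting, which motivates the Probabilistic Trend Family.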
3.0 Introduction to PTF
3. Introduction to the Probabilistic Trend Family modeling framework
3.1 Introduction to the Probabilistic Trend Family modeling framework, the three directions, and trends
The Probabilistic Trend Family (PTF) modeling framework is presented. This is not a method: rather than applying one model to all data, a model is identified that is appropriate for the data. That is, the model describes the trend structure by development period, accident period, and calendar period, along with the quality of the volatility about the trend structure. The volatility about the trend structure is called process variability. It is an integral part of the model.
The PTF modeling framework relates all the cells in the triangle in respect of trends. By contrast, in the LRT and ELRF frameworks, each accident period and development period are treated as a separate problem. However, trends do not only occur in accident and development time, they also occur in calendar time. The most important direction for projection in a loss development array is the calendar direction.
The properties of calendar trends and their projection into accident and calendar time are discussed. These properties are true for every real triangle.
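In outline, and with illustrative notation, a PTF model for the incremental losses p(w,d) (accident period w, development period d) can be written as:

```latex
\log p_{w,d} \;=\; \alpha_w \;+\; \sum_{j=1}^{d}\gamma_j \;+\; \sum_{t=1}^{w+d}\iota_t \;+\; \varepsilon_{w,d},
\qquad \varepsilon_{w,d} \sim N\!\left(0,\sigma_d^{2}\right),
```

where the α_w are accident period levels, the γ_j development period trends, the ι_t calendar period trends, and ε_{w,d} the process variability about the trend structure.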
3.2 Modeling M3IR5 in the PTF Framework
In this video, the TG M3IR5 is modelled. The data in the triangle was simulated according to the trend structure shown in the previous video. That is, one development period trend and three calendar period trends.
The default model in PTF estimates one calendar period trend. Residuals represent the trend in the data minus the trend estimated by the method. Accordingly, the residuals of the default model will indicate three trends in the data.
The diagnostic tools, model display, forecast scenario options, and forecast output are discussed.
Forecast results include:
- Forecast distributions for every cell in the triangle
- Forecast distributions for the aggregate of the cells including the total reserve distribution
- Calendar payment stream (critical for Enterprise Risk Management)
4.0 Case Study: CTP
4. Modeling real data (CTP) in the PTF modeling framework
4.1 Modeling CTP in PTF manually
In this video, we model the triangle group CTP using the PTF modeling framework. The method of evaluating whether parameters are needed and the method of adding parameters is illustrated. In general, we model development periods first, then whichever direction (accident or calendar) shows the most changes (from left to right).
The methods of modeling: adding trends, detecting outliers, maintaining the normality assumption, and optimisation are covered.
The statistically optimal model has the lowest Bayes Information Criterion (BIC).
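For reference, the standard form of the criterion (ICRFS™ may report an equivalent variant) is:

```latex
\mathrm{BIC} \;=\; -2\log L \;+\; k\log n,
```

where L is the likelihood of the fitted model, k the number of estimated parameters and n the number of observations; lower values indicate a better trade-off between fit and parsimony.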
The process of validation is discussed. If the trends in the data are stable, then we expect the model to predict the most recent years well when they are removed from the model estimation. This is shown to be the case for this data.
4.2 Modeling CTP using the Wizard and ELRF
In this video we study the wizard and how it models the triangle group CTP (dataset PL(I)). The wizard is simply a set of commands - similar to a macro. As the wizard builds a model, it evaluates the model structure and, depending on the structure identified, it will consider different alternative models.
In general, the wizard does a good job for modeling data. However, it is always necessary to evaluate a wizard model. That is, you need to determine which wizard model is best, to view outliers, and to check any trends set to zero. If you are not happy with any suggested wizard model, it is always possible to either adjust the wizard model or to build a model manually.
The functionality of PTF as available on the toolbar is also revised.
CTP is briefly examined in the ELRF modeling framework to determine whether ratios have any predictive power.
5.0 Heteroscedasticity
5. TG CS5: heteroscedasticity and varying parameters
5.1 CS5 and Heteroscedasticity
In this video, we introduce varying process variance (heteroscedasticity) by development lag within the context of the PTF modeling framework. This is easiest to understand in terms of changes of percentage variability. Generally, small payments vary more on a percentage scale than large payments.
We illustrate first using manual hetero analysis. In subsequent videos, we use automatic hetero adjustment.
The modeling of process variance is the fourth component of the PTF modeling analysis. Process variation is a common feature in all real data. This component of the model is just as critical as the changes in trends.
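As a rough illustration of the idea (a pandas sketch with made-up residuals, not ICRFS™ functionality): hetero by development period shows up as residual standard deviations that differ markedly across development periods, and the adjustment amounts to rescaling.

```python
import pandas as pd

# Hypothetical model residuals tagged with their development period.
residuals = pd.DataFrame({
    "dev":      [0, 0, 0, 1, 1, 1, 2, 2],
    "residual": [0.9, -1.1, 0.4, 0.3, -0.2, 0.1, 0.05, -0.04],
})

# Standard deviation of residuals within each development period; large
# differences across periods suggest a hetero (variance scaling) adjustment.
sd_by_dev = residuals.groupby("dev")["residual"].std()
print(sd_by_dev)

# Standardize: divide each residual by its development period's standard deviation,
# which is what a hetero adjustment amounts to on the log scale.
residuals["standardized"] = residuals["residual"] / residuals["dev"].map(sd_by_dev)
print(residuals)
```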
5.2 CS5 and varying parameters
In this study, we examine varying parameters. This concept of varying parameters is close to the idea of exponential smoothing and credibility adjustment.
The model parameters window is discussed along with the relationship between this window and the model graphs (residual and model display).
Basically, for full parameters we give maximum weight to the new observations and zero weight to previous observations (they do not have any relevance to our new trend/level). With varying parameters, information from previous observations is also included when estimating the new 'level'; thus we 'smooth' the changes between parameters.
The option 'reinstate all alpha parameters' is also discussed.
6.0 Case Study: ABC
6. TG ABC: modeling wizard, simulations, and release of capital as profit
6.1 Manual modeling, constraints, model templates
In this video, we discuss parameter constraints in the context of the triangle group ABC. Model templates are also revised.
Modeling is done from first principles, as in video 4.1. Parameters are added according to trend (or level) changes.
Parameter constraints can be set up manually or automatically through the optimise routine (assuming constraints are permitted in the modeling preferences - the default). Removal of constraints is also demonstrated.
6.2 Wizard, simulations, release of capital as profit, and more
The Separation Method (SM), a model template, in the PTF modeling framework, is used to compare two accident year exposure vectors.
Constraints (between non-contiguous parameters) are briefly revisited. Optimization options are demonstrated with the different constraint options. The purpose of constraints and the conditions under which constraints apply are also discussed. Constraints are most often found for accident levels (for example, to account for unusual events) or in calendar periods (to revert back to previous trends after a legislative change, for example).
Modeling in the ELRF modeling framework (including Mack) is discussed and compared with the PTF modeling framework. In the ELRF modeling framework we do not have any control over future calendar year trend assumptions, nor do we know what trends have been measured by the model.
Triangles are simulated from the optimal PTF model. It is almost impossible to distinguish between the simulated triangles and the real data. The forecasts (total reserve distribution) are statistically the same for the simulated triangles and the real data.
Future scenario formulation is illustrated. Two forecast scenarios are compared to assess the release of capital as profit at the end of the next calendar year, if the more conservative scenario does not play out for the next year.
The TG ABC is updated (expanded) to include the next calendar period. Models are also updated and are effortlessly monitored.
6.3 Reserve upgrades and conditional statistics
We discuss reserve upgrades and conditions under which estimates of prior accident year ultimates remain consistent when updating, that is, adding another calendar period.
Two new conditional statistics are introduced in the FS (forecast summary) displays, namely, the standard deviation of the mean ultimates conditional on the next calendar period's data (+_Ult|Data), and the mean of the standard deviation of the ultimate conditional on the next calendar period's data (St. Dev|Data).
The above mentioned statistics are relevant to a one year horizon view of the balance sheet of a company.
6.4 Modeling wizard revisited in the context of CS5
The modeling wizard is revisited using TG CS5. The models the wizard produces are compared and tested. If you get the wizard model sequence M1, M4, M6 (note that M4 is the same as M6), usually M4 = M1; if they are not the same, M4 is most often better than M1. Wizard generated models must always be tested.
The models produced by the wizard for CS5 are evaluated and compared.
It is very important to critically evaluate any zero trends (iotas) set by the wizard in the calendar period direction.
7.0 Importing and COM
7. Importing of data from other applications and COM Automation
7.1 Importing triangular data from other databases and manipulating objects in the ICRFS™ database
This chapter illustrates the simplicity of using COM automation to import triangular data from Excel spreadsheets into an ICRFS™ database. COM scripts are also used to manipulate objects in the database and run the modeling Wizard. The data can be imported from any other database including MS Access, Oracle and SQL Server.
7.2 Importing triangular data from unit record transactional data
This chapter illustrates the importing of triangular data from unit record transactional data into an ICRFS™ database using COM scripts. The example involves unit record data that reside in an MS Access database, but similar COM script can be written if the data reside in Excel, Oracle or SQL Server.
A COM script is also used to manipulate objects in the ICRFS™ database. Amongst other object manipulations, quarter*quarter arrays are collapsed to year*year, and we also collapse quarter*quarter to accident year by quarterly development (and calendar) periods.
The COM script is shown to be flexible and different layers can be extracted from the unit record transactional data. It is easy to add, remove, or modify queries.
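The aggregation step itself can be sketched in Python/pandas (the video uses COM scripts against MS Access; the table and column names below are hypothetical): unit-record transactions are rolled up into an accident period by development period incremental triangle, which could then be pushed into ICRFS™ via its COM interface.

```python
import pandas as pd

# Hypothetical unit-record payment transactions.
claims = pd.DataFrame({
    "accident_year": [2018, 2018, 2019, 2019, 2020],
    "payment_year":  [2018, 2019, 2019, 2020, 2020],
    "paid":          [100.0, 60.0, 120.0, 70.0, 150.0],
})

# Development period = payment period minus accident period.
claims["dev"] = claims["payment_year"] - claims["accident_year"]

# Incremental paid-loss triangle: rows = accident years, columns = development years.
triangle = claims.pivot_table(index="accident_year", columns="dev",
                              values="paid", aggfunc="sum")
print(triangle)
```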
8.0 PTF case studies
8. Further PTF modeling examples
8.1 CompA and WCom
The first triangle group we examine is CompA. This data is quite volatile on the percentage scale. A good model will measure the volatility in the data and will project the volatility into the future.
Topics covered:
- Diagnostics
- Autovalidation
- High process variability
- Forecasts
- Pricing
- Modeling Closed Claim Counts
The ultimates for the forecasts are conditional on the previously observed data. There is no need to smooth the ultimates or use Bornhuetter-Ferguson. The ultimates are the true ultimates conditional on the data.
The method of clearing process variance changes is also shown. This is useful when you have 'patterns' found in the hetero which you do not want to project into the future.
The WCom triangle group includes a significant inflation vector based on the known economic inflation in the reporting period. We compare models and forecasts with and without the associated inflation, and show there is no need to adjust for inflation with inflation vectors unless you need to know the inflation remaining after economic inflation is removed. That is, you want to measure social or superimposed inflation.
8.2 SDF and more on WCom
SDF is a simulated dataset with a single development factor and constant high volatility. We examine this in PTF and in ELRF. In the course of this we see how high volatility can misdirect judgment about model fit. We forecast with a future accident year and use this to model the Loss Ratios. We see how high volatility in the data leads to high volatility in the loss ratios. The ratio methods in ELRF do not capture the structure of this data, since there is no correlation between incrementals and prior cumulatives. We see how this shows up in the diagnostic plots and forecasts.
Returning to Wcom we look at ways to manually modify a wizard model by introducing constraints. This approach is important in enabling the user to incorporate different ideas about the drivers underlying the data.
8.3 Case Study LR High
This video assesses the validity of ELRF-type models, including the Mack method, and illustrates the power of the PTF modeling framework in modeling the paid losses, the Case Reserve Estimates, and the number of claims closed.
We also illustrate how much capital can be released as profit if a less conservative scenario plays out for the next year than that assumed in a conservative forecast scenario.
We begin by analyzing PL(C) in the triangle group LR High using the Mack method in the ELRF modeling framework. Examination of the calendar year residuals reveals a strong negative trend. Trends in the residuals represent the trends in the data minus the trends estimated by the model. This means that for this dataset the Mack method estimates a calendar year trend much larger than the trend in the data.
By including trends and giving zero weight to the early calendar years we design an ELRF model that estimates calendar year trends more in keeping with what is in the data. The resulting mean reserve (as expected) is much lower than that given by the Mack method.
Over estimation of a calendar year trend is illustrated in the PTF modeling framework by simulating a triangle assuming a 10% calendar year trend and setting the trend in the model to be 20%.
Future calendar year trend assumptions can have a major impact on forecast reserve distributions. One of the key differences between the PTF and ELRF modeling frameworks is that in the PTF modeling framework the trend structure and volatility about the trend structure are quantified and the actuary has control on forecast assumptions going forward.
Returning to LR High we look at the identified PTF models for Paid Losses, Case Reserve Estimates (CRE) and the Number of Claims Closed (NCC).
This leads to a discussion of the issues involved in forming a plausible forecasting scenario for the paid losses, incorporating the information from the CRE and NCC models. We also create a forecast scenario to match the Mack method (volume weighted averages) forecasts. The scenario is completely untenable!
Illustration of capital released as profit is also included.
8.4 Accident year heteroscedasticity
This video begins with a discussion of accident year hetero and moves on to issues of symmetry in modeling.
First a caveat. If there are significant differences in variance across two or more blocks of accident years it is likely that other trends also differ in these blocks and so the best way to model them is to use subdivided triangles in MPTF.
ICRFS™ has the capacity to model accident year hetero in just the same way as development year hetero, except that there is no automatic hetero option.
We illustrate the consistency of accident year hetero with development year hetero by modeling a triangle and its transpose.
In triangle group ABC we show how to transpose a triangle so that the development and accident year dimensions are interchanged.
We show that forecasts are preserved under transposition if the models are also transposed, or indeed if a model that is symmetric in the development and accident directions, such as the Statistical Chain Ladder, is applied.
In this case accident year hetero applied to the original triangle corresponds to development year hetero applied to the transposed triangle. We show how to do this and also which tables and graphs enable you to see exactly what hetero has been applied. The forecasts by calendar year are identical for these two transposed cases.
We consider the same situation in the case of link ratio based methods and demonstrate by a simple argument that the forecast values from a volume-weighted average ratio method are identical regardless of whether the ratios are calculated column-wise or row-wise. Hence Mack models predict exactly the same means for the original and the transposed triangle.
The Mack standard deviations however do not share in this symmetry. This fact is explained by the implicit conditioning on the values in the first column in the standard column-wise method. This break in symmetry indicates a flaw in the Mack methodology for computing standard deviations.
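As a quick numerical check of the mean symmetry claimed above (a hypothetical triangle, not the data in the video): completing an incremental triangle with volume weighted average ratios, and doing the same on its transpose, produces identical forecast incrementals.

```python
import numpy as np

def chain_ladder_incremental(inc):
    """Complete an incremental triangle (NaN in unobserved cells) using
    volume-weighted average link ratios on the cumulated data; return incrementals."""
    cum = np.cumsum(inc, axis=1)                 # NaNs propagate past the diagonal
    filled = cum.copy()
    for j in range(cum.shape[1] - 1):
        obs = ~np.isnan(cum[:, j]) & ~np.isnan(cum[:, j + 1])
        ratio = cum[obs, j + 1].sum() / cum[obs, j].sum()
        need = np.isnan(filled[:, j + 1])
        filled[need, j + 1] = filled[need, j] * ratio
    return np.diff(filled, axis=1, prepend=0.0)  # back to incrementals

inc = np.array([[100., 60., 30.],
                [110., 70., np.nan],
                [120., np.nan, np.nan]])

a = chain_ladder_incremental(inc)        # ratios column-wise
b = chain_ladder_incremental(inc.T).T    # ratios row-wise, via the transpose
print(np.allclose(a, b))                 # True: identical incremental forecasts
```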
8.5 Comparing Even and Uneven Triangle Groups
In this video, the difference in data view and forecast output for even and uneven triangle groups is illustrated.
9.0 Layers and PALD
9. Layers and the PALD Module
In this video we study layers and the predictive aggregate loss distribution (PALD) module.
The Triangle Groups: All 1M, All 1Mxs1M and All 2M are three layers. The structure of the models is very similar in all three layers. Note that the calendar trend in the intermediate layer: 1Mxs1M is not significant.
An identified PTF model predicts log normal distributions for each cell and their correlations. There is no analytical distribution of the sum of log normals. In order to determine the distribution of an aggregate of log normals we need to simulate from each log normal including their correlations. The PALD module provides the facility to conduct simulations in order to obtain distributions of aggregates for accident periods, calendar years and the total reserve. This output can then be used to compute percentiles and VaR tables.
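The simulation step can be sketched as follows (hypothetical parameters, not the PALD implementation): draw correlated normals for the logs, exponentiate, sum, and read off quantiles of the aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lognormal cells: means and std devs of the logs, and their correlations.
mu = np.array([10.0, 9.5, 9.0])
sigma = np.array([0.3, 0.4, 0.5])
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.6],
                 [0.3, 0.6, 1.0]])

cov = np.outer(sigma, sigma) * corr
logs = rng.multivariate_normal(mu, cov, size=100_000)
aggregate = np.exp(logs).sum(axis=1)     # aggregate of the correlated lognormal cells

mean = aggregate.mean()
for q in (0.75, 0.95, 0.995):
    var_q = np.quantile(aggregate, q)
    print(f"{q:.3f} quantile: {var_q:,.0f}  (margin above mean: {var_q - mean:,.0f})")
```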
The Reinsurance module is also covered. This module allows the evaluation of varying attachment points with the resultant expected payouts for the Insurer or Reinsurer (compared to no cover).
The output tables from these PALD and Reinsurance modules are explained.
The PALD results are run for the reserve distribution as well as for future pricing years.
It is important to include parameter uncertainty in the forecasting scenarios. Two forecast scenarios whose parameters have the same means but different standard deviations are not the same forecast scenario.
10.0 Introduction to MPTF
10. Introduction to MPTF
10.1 LOB 1 and LOB 3 (part 1)
In this video we introduce the MPTF (Multiple PTF) modeling framework. This modeling framework has a number of applications including:
- Multiple lines of business - diversification and a company wide picture
- Multiple segments
- Layers
- Credibility modeling
- Splitting triangles: change of mix of business or change in trend (or process variance) structure.
A single composite model can be designed for multiple LOBs that measures the volatility in each LOB (trend structure plus process variance(s)) and the different types of correlations between them. Output includes capital allocation by LOB and calendar year based on selected VaRs and T-VaRs.
The concepts of correlations and linearity are clarified. These concepts are fundamental to the MPTF modeling framework. Some misconceptions about correlations are discussed.
The identified PTF models for two TGs LOB 1 and LOB 3 are run and discussed.
10.2 LOB 1 and LOB 3 (part 2)
In this second part of the introduction to the MPTF modeling framework we begin by creating a composite triangle group (CTG) of the basic TGs, LOB 1 and LOB 3, as discussed in the previous training video. The objects contained in CTG are detailed.
The starting point for the MPTF modeling framework is the identified PTF model for each LOB. These are found under the related button in the CTG Models tab and can be loaded directly into MPTF.
New buttons covered include:
- Oc - Optimisation of Residual Correlations
- Co - Correlations and Covariances
- Oi - Optimisation (individual dataset) one step
- OOi - Optimisation (individual dataset) complete
The relationship between parameter correlation and future forecast scenarios is illustrated. The removal of parameter constraints between datasets is explained.
The forecast scenario summary information is presented.
The variance/covariance capital allocation formula is shown along with the corresponding output in the forecast summary tab.
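A standard variance/covariance allocation, which is the kind of formula referred to here (the exact form used in the forecast summary may differ in detail), assigns LOB i the share

```latex
\frac{\operatorname{Cov}\!\left(X_i,\,S\right)}{\operatorname{Var}(S)}, \qquad S=\sum_j X_j,
```

so the shares sum to one and each LOB's risk capital reflects its covariance with the aggregate rather than its stand-alone variance.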
The PALD simulations for the aggregate loss distributions are compared with zero correlation (assuming independence between lines) and with the model including correlations (optimal). In this scenario the two lines are highly correlated, including the reserve distribution correlation, and there is no credit for diversification - a much greater capital risk charge is required due to the reserve correlation.
10.3 LOB 1 and LOB 3 (part 3)
The information in the forecast tables (F button) for the aggregate of the two LOBs, and the forecast summary (FS button) tables is illustrated.
The PALD simulations for the aggregate loss distributions are compared with zero correlation (assuming independence between the two lines) and the model including correlations (optimal). The two lines are highly correlated and there is little credit for risk diversification - a much greater capital risk charge is required due to the reserve distribution correlation.
Capital allocation by LOB based on selected VaRs and T-VaRs is also discussed.
11.0 MPTF Concepts
11. Clusters and MPTF Concepts
11.1 MPTF Clusters Case Study Segments Seg 1, 2, ... 10
In this video we demonstrate the method of creating clusters. For this example, we use the composite triangle group: SEGx10 that comprises 10 segments of an LOB.
Process correlation between any two members in a cluster is significant, whereas inter-cluster correlations are insignificant and are accordingly set to zero.
There are two main ways of automatically creating clusters. You can retain all significant correlations (which might include some insignificant correlations) or you can remove all insignificant correlations (which may remove some significant correlations).
We analyze the scenario where all insignificant correlations are removed.
Simultaneous, cluster, and independent modeling, and the propagation of an action (adding a parameter, for example) to a dataset, are discussed.
We also simulate a composite dataset from the optimal composite model. Trend structures, process variability and forecasts are then compared and are shown to be very similar. That is, the composite simulated dataset and the real composite dataset have similar volatility and correlations.
11.2 MPTF modeling principles using 1M, 1Mxs1M and 2M
In this video we continue with a discussion of the MPTF modeling framework in the context of the CDS 1M, 2M, 1Mxs1M. Process correlations between these layers are very high.
Functionality discussed in this video includes:
- Model Templates
- Loading models (including related models)
- Replicate models
- Simultaneous, clusters, independent modeling and testing
- Constraints
- Optimization buttons
- Hetero
- Logbook (as per PTF, but with aggregate statistics)
MPTF Options in the modeling preferences are set.
The typical process of optimizing an MPTF model for layers is illustrated by example.
It is found that the reserve distributions of the (limited to) 1M and (limited to) 2M layers have the same CV!
The risk capital as a % of the mean reserve based on VaR or T-VaR is the same for (limited to) 1M and (limited to) 2M.
12.0 Capital Management
12. Capital Management of long-tail liabilities
12.1 Common drivers and Process Correlation: TG GrossVsNet
How do we know if two LOBs have common drivers?
Gross and net of reinsurance data have very much the same trend structure and high process correlation because they have common drivers (especially in the calendar year direction). This is illustrated with a real life example.
However, it is very rare to see two LOBs with similar trend structure or significant process correlation. That is, in general, LOBs do not have common drivers.
12.2 Design of a composite model for Company B and allocation of risk capital
A composite model is designed for all the long tail LOBs of Company B. The starting point is the optimal model designed (identified) for each LOB in the (PTF) modeling framework and the associated forecast scenarios.
Clusters are identified based on process correlations between the LOBs.
We find that most LOBs do not have significant process correlation. Moreover, LOBs do not share the same trend structure and process variance. Accordingly they do not have common drivers.
Forecast summaries include:
- Reserve distribution correlations
- Risk capital allocation (percentages) by LOB
- Payment streams by calendar year for each LOB and the aggregate
- Risk capital allocation (percentages) by calendar year for each LOB and the aggregate
Risk Capital allocation by LOB and calendar year is based on a variance/covariance formula.
Calendar year payment streams are critical for the cost of capital calculations. These payment streams are inseparably related to the interaction between the development period and calendar period parameters. (An accurate projection of the calendar year payment streams cannot be obtained in any other way.)
The volatility of ReA is examined. The process volatility in the past is high. We expect to see the same volatility when we project into the future.
12.3 VaR, T-VaR, Risk Capital Allocation, and Underwriting risk charge versus Reserve risk charge
Forecast scenarios going forward are adjusted so that estimates of the means of the reserve distributions correspond to reserves held. For most LOBs they are quite conservative. The CV of the aggregate is smaller than the CVs for most of the individual lines.
Since there is no analytical form for the sum of log normals, we simulate from the projected correlated log normals in each cell for each LOB to find the distribution of aggregates.
Graphs of risk capital allocation for selected VaRs and T-VaRs are discussed.
It is shown that the combined risk charge for reserve and underwriting risk is less than the sum of the individual risk charges.
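A toy simulation (made-up lognormal liabilities, not Company B's data) illustrating the statement above; here VaR is taken as the quantile in excess of the mean, and T-VaR averages the tail beyond the VaR.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reserve and underwriting liabilities, simulated independently here.
reserve = rng.lognormal(mean=10.0, sigma=0.25, size=200_000)
underwriting = rng.lognormal(mean=9.5, sigma=0.35, size=200_000)

def risk_charge(sample, q=0.995, tail=False):
    var_q = np.quantile(sample, q)
    if tail:                                   # T-VaR: mean loss beyond the VaR
        return sample[sample >= var_q].mean() - sample.mean()
    return var_q - sample.mean()               # VaR measured above the mean

combined = reserve + underwriting
print(risk_charge(reserve) + risk_charge(underwriting))  # sum of stand-alone charges
print(risk_charge(combined))                             # smaller: diversification credit
```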
12.4 Company B LOB ReA versus the aggregate of all LOBs and simulation of a composite dataset from the composite model
The LOB ReA has the largest CV and, if this were the only line written by Company B, it would require a large amount of risk capital. Company B is afforded substantial risk diversification credit as a result of the reserve distributions exhibiting essentially zero correlation.
A composite dataset is simulated from the composite model for all the long tail LOBs. It is shown to have the same risk characteristics as the real data.
12.5 Excel based Report for Company B for all LOBs (Company Wide Report)
A report is generated in Excel for Company B using a report template that exhibits reserve summaries for the aggregate across all LOBs and each LOB by accident year and calendar year. Capital allocation tables and graphs are also given for specific VaR.
One great benefit of an ICRFS database is that all reports are linked to the corresponding triangle groups and accordingly a report can be found with just a few mouse clicks.
12.6 Introduction, Forecast combinations, and the wizard
In this video we discuss various functionality, including the following:
The wizard can now be run on multiple triangle groups in batch mode.
Forecast combinations allow the construction of forecasts consisting of linear combinations of input datasets (including negatives). By way of illustration, we show gross minus deductibles to obtain net forecasts. This technique is useful for the analysis of net data where there are negatives as a result of recoveries or subrogations, and similarly for analysing excess layers (2M - 1M = 1Mxs1M, for instance).
Forecast settings are now included graphically in the forecast summary window. This important update provides an easy way to compare trends in the model with future forecast assumptions.
13.0 Solvency II
13. Solvency II one year and ultimate year risk horizons and IFRS 4 metrics including fungibility and ring fencing: SCR, Best Estimate of Liabilities (BEL), Technical Provisions (TP) (Fair Value of Liabilities), and Market Value Margins (MVM) (Risk Margins) for the aggregate of long-tail LOBs
In the following seven videos, we present a mathematically tractable solution to the Solvency II risk metrics for the one year risk horizon that is not recursive or circular. Our solution is based on relevant Solvency II directives and consultation papers.
The ultimate year risk horizon is quite straightforward and is described here.
In respect of IFRS 4 we also discuss and demonstrate fungibility and ring fencing between LOBs and along calendar years (going forward).
It is only in the Probabilistic Trend Family (PTF) and Multiple PTF (MPTF) modeling frameworks that parameter uncertainty and forecast assumptions going forward are explicit, auditable, and can be monitored in a sound probabilistic framework.
The first calendar year is in distress at the 99.5th percentile, that is, 1 in 200 times. Accordingly, in order to compute Solvency II metrics, it would typically be necessary to conduct 20 million simulations of the unconditional distributions of the future calendar year liability streams, of which a subset of approximately 100,000 sample paths is in distress. This subset of sample paths is used to compute the conditional distributions of the payment streams from the second year onwards, conditional on the first year being in distress. As our proprietary algorithms can simulate correlated lognormals conditional on the sum of lognormals (being at 99.5%), we do not take this approach. Instead, we can run 100,000 simulations unconditionally, and another 100,000 conditional simulations - a significantly faster algorithm with no loss of accuracy.
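The brute-force approach described above can be sketched as follows (illustrative numbers only; this is not the proprietary conditional-simulation algorithm): simulate calendar year payment streams, flag the sample paths whose first year exceeds its 99.5% quantile, and use only those paths for the conditional distributions of later years.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sims, n_years = 400_000, 5

# Hypothetical correlated lognormal calendar year liability streams.
mu = np.linspace(10.0, 8.0, n_years)
cov = 0.04 * 0.5 ** np.abs(np.subtract.outer(np.arange(n_years), np.arange(n_years)))
streams = np.exp(rng.multivariate_normal(mu, cov, size=n_sims))

threshold = np.quantile(streams[:, 0], 0.995)
distressed = streams[streams[:, 0] >= threshold]   # ~0.5% of the paths

print(len(distressed))                              # roughly n_sims / 200
print(streams[:, 1:].mean(axis=0))                  # unconditional means, years 2+
print(distressed[:, 1:].mean(axis=0))               # conditional on year 1 distress
```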
Statistical and mathematical technicalities are treated in the first video.
The second video considers a real life example comprising six LOBs. Solvency II metrics are computed (assuming full fungibility across all LOBs) for the most volatile LOB and the aggregate of all LOBs to illustrate, amongst other things, risk diversification credit of SCR and Technical Provisions.
In the third video it is shown that for the aggregate of the six LOBs,
Technical Provisions + SCR = Undiscounted Reserves (total capital at inception),
assuming a risk free rate of 4% and a spread of 6%. This is due to risk diversification credit of writing the six LOBs that are uncorrelated.
This result is far from true for the most volatile LOB4, were it the only one written!
Each year, Solvency II metrics are recomputed. In the fourth video, we discuss conditions under which estimates of prior year ultimates and related Solvency II metrics are statistically consistent on updating.
In the fifth video we compare SII and IFRS 4 metrics, namely, SCR, Risk Margins and Fair Value of Liabilities, not allowing for 'surpluses' (total calendar year loss less than the mean) in one LOB to pay for a high loss (greater than the mean) in another LOB. That is, the 'surplus' is ring fenced within the LOB, so that there is no fungibility.
Indeed, portfolios of LOBs can be set up so that there is complete fungibility across LOBs in the same portfolio, but no fungibility (surplus sharing) between LOBs in different portfolios. That is, each portfolio is ring fenced.
We also consider another option and that is fungibility along calendar years going forward. That is, if the total loss in a calendar year (2+) is less than the mean, then the surplus can be used to pay for losses in future calendar years that exceed the mean.
In the sixth video we look at distressed samples, that is, samples for which the first calendar year is in "distress" at the 99.5th percentile, including a situation where we remove all process variability and only have parameter uncertainty (volatility).
The seventh video considers the SII Ultimate year risk horizon with all the possible scenarios in respect of ring fencing defined portfolios and along calendar years going forward.
13.1 Our solution for calculating the long-tail liability Solvency II risk measures for the one-year risk horizon
This video discusses Best Estimates of Liabilities (BEL), Market Value Margins (or Risk Margins), Technical Provisions (TP) (or Fair Value of Liabilities), and Solvency Capital Requirements (SCR). It is emphasized that calendar year loss distributions and their correlations are necessary to compute any solvency II risk measure.
Our solution to the one-year risk horizon is shown to arise directly out of relevant directives and consultation papers. Our solution is not recursive or circular contrary to other proposed solutions.
For the one-year risk horizon, risk capital is raised at the beginning of each year. The cost of raising the risk capital, the Market Value Margin (MVM) or premium on the risk capital, is paid to the capital providers at the end of each year along with any unused risk capital. The sum of the MVMs and the Best Estimate of Liabilities (BELs) for each calendar year is the Technical Provision (also referred to as Fair Value of Liabilities).
We show, based on the directives and the definition of the first calendar year being in distress, that the SCR is given by the equation,
SCR = VaR(1) + ΔTP,
where VaR(1) is the VaR99.5% for the first calendar year and ΔTP is the additional technical provisions for the subsequent years if the first year is in distress.
The SCR is required to cover the losses for the distressed year (VaR99.5%) and to restore the Fair Value of Liabilities (Technical Provisions) of the balance sheet at the beginning of the second year. The ΔTP is allocated to each year as required to ensure that there is sufficient monies to meet the additional BEL and sufficient MVM to raise the risk capital in the event of the first (next) calendar year being a distressed year.
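Restating these relationships in symbols (discounting omitted for brevity):

```latex
\mathrm{TP} \;=\; \sum_{t \ge 1}\left(\mathrm{BEL}_t + \mathrm{MVM}_t\right),
\qquad
\mathrm{SCR} \;=\; \mathrm{VaR}_{99.5\%}(1) \;+\; \Delta\mathrm{TP},
```

where VaR_{99.5%}(1) is the 99.5% value at risk of the first calendar year's liability stream and ΔTP is the additional technical provision needed for the subsequent years when the first year is in distress.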
13.2 Solvency II risk measures for a real life example involving six LOBs
The second video considers a real life example comprising six LOBs. Solvency II metrics are computed for the most volatile LOB and the aggregate of all LOBs to illustrate, amongst other things, risk diversification credit of SCR and Market Value Margins (a component of the Technical Provisions).
The most volatile LOB (LOB 4) possesses much process variability and a large degree of parameter uncertainty. The base development year trend is not negative until development year six. As a result of this, and a high calendar trend assumption going forward, it takes ten years before 50% of the total reserves are paid out. More importantly, the high correlations of the calendar year paid loss distributions have a large impact on the additional technical provisions required if the first (calendar) year is in distress, resulting in an SCR that is a high proportion of BEL.
By contrast, for the aggregate of the six LOBs, 50% of the total reserves are paid out in the next two years and the paid loss calendar year correlations are comparatively low. Furthermore, there is only significant (low) process correlation between two LOBs. Accordingly, there is significant diversification credit in respect of SCR and risk capital allocation by writing the six LOBs instead of just the single, most volatile LOB.
It is also important to point out that the most volatile LOB4 comprises only 11.2% of the total mean reserve across the six LOBs. Next year in distress is principally driven by the largest LOB3 which comprises 66% of the total mean reserve.
Indeed, if only the most volatile line was written, then the SCR as a percentage of BEL, is close to 100%! For the aggregate, however, the SCR as a percentage of BEL is only 11%.
Also see video chapter 13.3 for a comparison of total capital and undiscounted reserves.
13.3 Comparison of TP+SCR with undiscounted BEL: risk diversification of SCR and MVM
In the third video it is shown that for the aggregate of the six LOBs,
Technical Provisions + SCR = Undiscounted Reserves,
assuming a risk free rate of 4% and a spread of 6%. This is due to risk diversification credit of writing the six LOBs that are uncorrelated.
This result is far from true for the most volatile LOB4, were it the only one written!
In practice, we conclude that Solvency II capital requirements are not expected to be a burden on insurers or reinsurers with good diversification between LOBs, even when highly volatile lines are written as part of their portfolio.
13.4 Consistency of estimates of prior accident year ultimates and Solvency II metrics on updating
It is first explained using a simulation example that calendar year trends project onto development years and accident years. Consequently, estimates of prior accident year ultimates and Solvency II risk measures are statistically consistent on updating only if assumptions going forward are consistent.
One does not want to be in a position where the distress situation in the next calendar year is due to model error rather than to outcomes in the tail of the projected loss distributions.
A real life example is considered involving updating that illustrates the importance of explicit assumptions going forward and shows consistency of estimates of prior accident year ultimates and Solvency II metrics.
13.5 SII and IFRS 4 metrics excluding fungibility across LOBs and calendar years going forward, relative to assuming fungibility
In respect of the IFRS 4 Exposure Draft (2010), BC119, the level of aggregation of Risk Margins is considered in the context of the degree of fungibility.
Ring fencing and fungibility are also discussed in QIS5, SCR.
In this video we compare SII and IFRS 4 metrics, namely, SCR, Risk Margins and Fair Value of Liabilities, not allowing for 'surpluses' (total calendar year loss less than the mean) in one LOB to pay for a high loss (greater than the mean) in another LOB. That is, the 'surplus' is ring fenced within the LOB.
It turns out that for the specific example under consideration involving the six LOBs there is very little loss in diversification by ring fencing.
We also consider another option and that is fungibility along calendar years going forward. That is, if the total loss in a calendar year (2+) is less than the mean, then the surplus can be used to pay for losses in future calendar years that exceed the mean.
13.6 Distressed Samples driven by process volatility and parameter volatility
One of the conditions of the Solvency II one-year risk horizon is that the metrics are all based on the first year (the next calendar year) being in distress at the 99.5th percentile.
The next calendar year is in distress based on two (related) drivers, namely process volatility, parameter volatility (uncertainty), or both.
In this video we study distressed samples, that is, samples for which the first calendar year is in "distress" at the 99.5th percentile, including a situation where we remove all process volatility and only have parameter volatility (uncertainty).
13.7 The Ultimate Year Risk Horizon
The Solvency II regulatory Capital Requirements are based on the one-year risk horizon.
However, in respect of running the business the company may decide to consider the ultimate year risk horizon.
In this video we compute the SII and IFRS 4 metrics for the aggregate of the six LOBs considered in video chapters 13.2 and 13.3 for the ultimate year risk horizon and compare the metrics with those obtained for the one-year risk horizon. Naturally, the ultimate year risk horizon is much more onerous in respect of Risk Capital and Risk Margins than the one-year risk horizon.
Four possible scenarios for fungibility by LOB and calendar year are also considered.
13.8 Solvency II and multiple distress years
In this video we explore the ability of the ICRFS™ Solvency II module to take into account more than one future year in distress. This applies where the Solvency II paradigm requires that subsequent years also be rebalanced after the following twelve months turn out to suffer losses at a distress level corresponding to the 99.5th percentile.
Solvency II output is compared for a sample dataset with 1, 2, 3 and 4 future years in distress. Changes are seen to occur only in the MVM; these changes are traced through the tables and explained.
Solvency II capital requirements for each LOB and the aggregate of all LOBs are only computed by ICRFS™ in a sound statistical framework.
14.0 Other applications
14. Other applications of the MPTF modeling framework
14.1 Credibility modeling CompA using industry Maa951
CompA has high process variability.
In this case, the final development period trend and the calendar period trend are both statistically zero after optimization. The future calendar period trend assumption is especially crucial in forecasting. We show how to credibility adjust these two trends, which were found to be statistically insignificant using the CompA data alone. The credibility adjustments are based on the industry data Maa951.
In the absence of collateral data it is prudent to use the insignificant calendar year trend for forecasting in the presence of high process variability.
A more systematic approach to this problem is to use credibility modeling - if collateral data are available.
In this case, CompA represents Auto BI data from a single company comprising about 3% of the market. The TG Maa951 represents the total industry and obviously exhibits much lower process variability.
The industry data have high, unstable calendar year trends and a final negative development year trend, whereas CompA has an insignificant (zero) calendar year trend and an insignificant (zero) final development period trend. You may recall that CompA, even though possessing high process variability, is stable in respect of trends. Removal of the last nine calendar periods gave very good predictions of the distributions of the 161 observations left out, and reserve distributions beyond the last calendar period are the same as those using all the data!
When we run the two lines together as a composite in MPTF we find a correlation of around 0.25 between the datasets. The credibility model for CompA is then formed by the MPTF model which takes the correlations into account and is formed by freshly evaluating all the parameters on this basis.
After some exploration we find that the credibility model for CompA contains a positive calendar (inflation) trend, but does not support a negative final development trend. The forecast from this model is more conservative than the one based on following through on the insignificant trend.
14.2 MPTF Net of Reinsurance versus Gross. Is outward reinsurance optimal?
One application of ICRFS™ is the evaluation of outward reinsurance programs and optimal retention. In this video we study an example of Gross data versus Net of reinsurance data. Process correlation between the two segments is very high as expected, and the trend structure is almost identical.
14.3 Credibility modeling small arrays (Company X and Company Y)
In some circumstances we have a small array containing data with high volatility, but we also have a larger array of data for the same LOB which we believe represents a good approximation to the larger context of our original dataset. We want to use credibility modeling to extend our model and hence our forecasting ability beyond the limitations of the data. We explain how to place the data in a larger array filled out with zeros so that MPTF modeling is possible. This is carried out for illustrative purposes with real data called Company X and Company Y.
15.0 Bootstrap
15. The Bootstrap: how it shows the Mack method doesn't work
"To
kill an error is as good a service as, and sometimes even better
than, the establishing of a new truth or fact!" |
||
- Charles Darwin |
Bootstrap samples of the Mack method provide another compelling reason, amongst the numerous others, that it does not work. That is, it gives grossly inaccurate assessments of the risks.
The Mack method is a regression formulation of volume weighted average link ratios, the latter also known as the chain ladder method.
The idea behind the bootstrap is an old one. It is a re-sampling technique popularized by Brad Efron (1979) in his celebrated Annals of Statistics paper. Efron drew our attention to its considerable promise and gave it its name.
The bootstrap technique is used to calculate standard errors of parameters, confidence intervals, distributions of forecasts and so on. Typically, it is used when the sample size is small so that distributional assumptions cannot be tested and asymptotic results are not applicable. It also has applications to large sample sizes where distributional and model assumptions can be tested but the mathematics for computing forecast distributions is intractable.
For a paper on the bootstrap and the Mack Method click here.
The bootstrap technique is not a model and it does not make a bad model good.
Bootstrap samples are generated subsequent to a model being fitted to the data. A bootstrap sample (pseudo-data) has the same features as the real data only if the model satisfies assumptions supported by the data.
Accordingly, the bootstrap technique can be used to test whether the model is appropriate for the data.
In these video chapters we compare bootstrap samples for the Mack method versus bootstrap samples based on the optimal PTF model. We find that bootstrap samples (pseudo data) based on the Mack method (and related methods) do not reflect features in the real data - you can easily distinguish between the real data and the bootstrap samples. However, you cannot distinguish between bootstrap samples based on the optimal PTF model and the real data!
If the bootstrap samples do not replicate the features in the real data then the model is bad.
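A generic residual bootstrap can be sketched as follows (illustrative arrays, not the ICRFS™ implementation): resample the standardized residuals with replacement and rebuild pseudo-data from the fitted values. If the residuals retain structure, such as calendar year trends the model missed, the pseudo-data will not resemble the real data.

```python
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_samples(fitted, residuals, scale, n_boot=1000):
    """fitted, residuals and scale are flat arrays over the observed cells."""
    standardized = residuals / scale
    samples = []
    for _ in range(n_boot):
        resampled = rng.choice(standardized, size=standardized.size, replace=True)
        samples.append(fitted + resampled * scale)   # pseudo-data on the model's scale
    return np.array(samples)

# Hypothetical fitted log-incrementals, residuals and hetero scales for six cells.
fitted = np.array([5.0, 4.6, 4.2, 5.1, 4.7, 5.2])
residuals = np.array([0.10, -0.08, 0.02, -0.12, 0.05, 0.03])
scale = np.array([0.10, 0.10, 0.05, 0.10, 0.10, 0.10])

pseudo = bootstrap_samples(fitted, residuals, scale)
print(pseudo.shape)   # (1000, 6): one pseudo-dataset per row
```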
We study two LOBs:
- Triangle Group (TG) "ABC BS"
- Triangle Group (TG) "LRHigh BS"
Both datasets are real with changing calendar year trends. Moreover, the incremental paid losses in the "LRHigh BS" TG are heteroscedastic versus development period. That is, percentage variability varies by development period. This is another feature that the Mack method cannot capture, as shown by the Mack bootstrap samples.
In each case it is shown that the Mack method does not capture calendar year trends and the corresponding bootstrap samples bear no resemblance to the real data. This is not the case with the optimal PTF model.
15.1 Introduction to the Bootstrap
This video provides an introduction to the bootstrapping re-sampling technique using a PowerPoint presentation. It is emphasized that (i) standardized residuals represent trends in the data minus trends estimated by the method; (ii) bootstrap samples based on a good model have the same salient features as the real data; and (iii) the bootstrap technique works if the weighted standardized residuals of a model come from the same distribution. If there is any structure in the residuals, the corresponding bootstrap samples do not resemble the features in the real data. Accordingly, the bootstrap technique can be used to test the validity of the model for the (real) data.
15.2 Overview of the Mack method and the PTF modeling framework
The Mack method is a regression formulation of the link-ratio technique termed volume weighted averages. We use a real data set to explain the Mack method and how to calculate residuals. An extensive study of the Mack method and its relatives that all belong to the Extended Link Ratio Family (ELRF) modeling framework is given in video chapter 1.2 The Link Ratio Techniques (LRT) and the Extended Link Ratio Family (ELRF) modeling frameworks. Examples of Mack and other related methods fitted to real data are given in video chapter 2. Applications of the PTF and ELRF modeling frameworks.
An overview of the Probabilistic Trend Family (PTF) modeling framework is also given using a simulated data set. A more extensive study of the PTF modeling framework and its applications to real data is given in Chapter 3.1.
The Mack method does capture a single, stable calendar year trend. Here also, by way of a simulation, we show that when the data have a 10% calendar year trend the Mack method does capture the trend, but there are no descriptors of it.
15.3 Bootstrap TG ABC BS
These data have major calendar year trend shifts that are quantified by the optimal PTF model.
We first create a bootstrap sample of the triangle values assuming they all come from the same distribution, that is, we randomly reshuffle the values into the different cells. This is done by setting all fitted parameters to zero. In this case bootstrapping the residuals is the same as bootstrapping the observations. Naturally the bootstrap triangle has very different structure to the real data. Most practitioners would argue that this is a silly thing to do. We agree! Furthermore, it is just as silly to bootstrap the residuals if the residuals of a model have any type of structure in them. That is, the scaled residuals are not random from the same distribution.
The Mack method applied to the corresponding cumulative array has residuals that exhibit calendar year trend changes (structure). That is, the residuals are not random from the same distribution. Bootstrap samples based on the Mack method are easily distinguishable from the real data, yet bootstrap samples based on the optimal PTF model are indistinguishable from the real data.
15.4 Bootstrap TG LR High BS
The residuals of the Mack method applied to these data exhibit a very strong negative trend. This means that the trends estimated by the (Mack) method are much higher than those in the data. Accordingly, the answers are biased upwards by about a factor of two. Bootstrap samples based on the Mack method are easily distinguishable from the real data, yet bootstrap samples based on the optimal PTF model are indistinguishable from the real data. The real incremental data have major calendar year trend shifts, and the quantity of process variation (on a log scale) varies by development period. Neither of these features is captured by the Mack method.
16.0 Updates
16. Updates from 10.6 to 11
16.1 Database updates
In particular:
- The currency system variable is introduced.
- Descriptions for each database object within a triangle group are shown.
- Control over composite order is covered briefly.
16.2 PTF updates
In particular:
- Future and reserving accident periods are now controlled in a single dropdown.
- Subsets of future calendar years can be selected.
- Multiple future calendar periods can be conditioned on for the purpose of calculating conditional statistics.
16.3 MPTF updates
In particular:
- Datasets are displayed in list view. Previous tabbed view is available via the display preferences.
- Forecast and forecast combinations dialog boxes are combined.
- Control over currency is available when multiple currencies are present.
- PALD and Solvency II(*) can now be run on any aggregate in the forecast.
(*) Some additional restrictions apply to Solvency II - for instance, at least one positive factor must exist in the selected aggregate.
16.4 ELRF bootstrap update
In particular:
- The bootstrap technique can be applied to any average link ratio (only) model in ELRF.
- Output is available similar to PALD including VaRs, T-VaRs, and distributions by accident year, calendar year (if not incurred data), and total.
- The bootstrap sample can be centered around the model mean or the sample mean.