# GI ROC Reserving study 2009

In 2009 the General Insurance Reserve Oversight Committee (GI ROC, UK) initiated a research project on the effectiveness of reserving methods.

As part of this project, actuaries were invited to respond to a practical survey in which they would analyse and forecast a randomly assigned dataset, drawn from real company data, using any combination of a number of pre-specified standard methods.

The procedure was set up in Excel spreadsheets specially designed so that the data were presented in stages corresponding to annual reports. The initial stage contained ten years of historic data, and only after fully formulating a reserving procedure and forecasts on the basis of this experience was the participating actuary given the following two years' data.

The data in each case consisted of Paid Losses (PL), Incurred Losses (IL), and Outstandings (OS), also known as Case Reserve Estimates or CREs. There were usually one or more exposure vectors, and in some cases considerable additional information relating to claim counts and costs at closure. In all cases a brief verbal description of the line of business was also given.

The exercise was thus designed to determine the predictive power and responsiveness of reserving methods as new data became available over the next few years.

Insureware was very happy to be invited to participate in this exercise, but since our modelling approach does not work via traditional methods or development factors we could not present our results in the interactive Excel spreadsheets that were originally used.

Accordingly, we created heuristic reports on the given datasets D, F, H and I, and have structured these reports to follow what we believe to be the spirit of the exercise. We show how we would typically respond at each stage of the process while in possession of only the data available at that time.

The opening paragraphs of the Insureware report are reproduced below. The complete PDF is available here (3.12 MB).

The Insureware report was co-authored by David Munroe (Insureware), David Odell (Insureware), and Ben Zehnwirth (Professorial Visiting Fellow, University of New South Wales, and Insureware).

### Introduction and Summary

For each of the datasets D, F, H and I we identify (design or build) the optimal model in the Probabilistic Trend Family (PTF) modelling framework for the incremental paid losses and the Case Reserve Estimates (CREs).

A model belonging to the PTF modelling framework is depicted by four graphs: the trend structure in each of the three directions (development period, accident period and calendar period), and the quality of the process variance about the trend structure. Forecasting scenarios for the paid losses are based on the information extracted from the paid losses in respect of stability of calendar year trends, and on any identified 'trend relationship' between the paid losses and the CREs. In the PTF modelling framework the actuary has control over formulating forecast scenarios for the future related to past experience. These scenarios are explicit, auditable, and can be monitored in a sound probabilistic framework.
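To make the four-graph description concrete, a typical member of the PTF family (written here in our own notation as a standard presentation of the framework, not a formula quoted from the report) models the logarithm of each incremental paid loss as a sum of trends in the three directions plus process variability:

$$
\ln p_{w,d} \;=\; \alpha_w \;+\; \sum_{j=1}^{d}\gamma_j \;+\; \sum_{t=2}^{w+d}\iota_t \;+\; \varepsilon_{w,d},
\qquad \varepsilon_{w,d}\sim N(0,\sigma^2),
$$

where $w$ indexes accident periods, $d$ development periods, and $t = w + d$ calendar periods; the $\alpha_w$ are accident-period levels, the $\gamma_j$ development-period trends, and the $\iota_t$ calendar-period trends. The four graphs correspond to the three trend directions plus the process variability about the trends.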

The Mack method is also applied to the cumulative paid losses and the incurred losses. These applications are tested in respect of capturing the volatility in the data and in respect of their degree of predictive power.
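For readers unfamiliar with the mechanics, the Mack method's central estimates coincide with those of the volume-weighted chain ladder; Mack's contribution is a distribution-free standard error formula, omitted here for brevity. A minimal sketch on an illustrative triangle (our own toy numbers, not data from the study):

```python
import numpy as np

# Cumulative paid losses; rows = accident years, columns = development years.
# Illustrative figures only.
triangle = np.array([
    [100.0, 180.0, 220.0, 240.0],
    [110.0, 200.0, 245.0, np.nan],
    [120.0, 215.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n = triangle.shape[0]

# Volume-weighted development (link) factors, column to column.
factors = []
for d in range(n - 1):
    rows = slice(0, n - 1 - d)  # accident years with data in both columns
    factors.append(triangle[rows, d + 1].sum() / triangle[rows, d].sum())

# Project each accident year's latest diagonal value to ultimate.
ultimates = []
for w in range(n):
    latest_d = n - 1 - w
    value = triangle[w, latest_d]
    for d in range(latest_d, n - 1):
        value *= factors[d]
    ultimates.append(value)

# Reserve = ultimate minus the latest observed cumulative value.
reserves = [u - triangle[w, n - 1 - w] for w, u in enumerate(ultimates)]
print([round(f, 3) for f in factors])
print([round(r, 1) for r in reserves])
```

Note that the link factors are ratios of column sums, so the method carries no explicit descriptor of calendar year trends - the point the report makes repeatedly.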

In order to calculate the cost of capital, the probability distribution of the liability stream by calendar year needs to be computed. We do this based on explicit and auditable assumptions that can be monitored on updating in a sound probabilistic framework. The volatility in the future paid losses cannot be extracted from the incurred losses array; moreover, for most cumulative arrays the Mack and related link ratio methods give grossly inaccurate indications.

### Summary of results

In general, the standardized residuals of a fitted model exhibit the structure remaining in the data after adjusting for the fitted parameters. Equivalently, they represent the trends in the data minus the trends estimated by the model. For an optimal model the residuals are random around zero, so that the trends in the data equal the trends estimated by the model.
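The idea can be illustrated with a toy example of our own (not the report's code): fit a trend to data containing a single trend, standardize the residuals, and observe that they scatter randomly around zero.

```python
import numpy as np

# Toy data: a single linear trend plus noise.
rng = np.random.default_rng(0)
t = np.arange(20, dtype=float)
y = 2.0 + 0.15 * t + rng.normal(0.0, 0.3, size=t.size)

# Fit the trend and compute standardized residuals:
# (observed - fitted) / estimated sigma.
coef = np.polyfit(t, y, 1)
fitted = np.polyval(coef, t)
resid = y - fitted
sigma = resid.std(ddof=2)       # two fitted parameters
std_resid = resid / sigma

# For a model capturing all the trend structure, the standardized
# residuals have mean ~0 and no remaining pattern against t.
print(round(float(std_resid.mean()), 6))
```

If instead the data contained a trend change that the model ignored, the residuals would display that change - exactly how the report diagnoses the Mack method below.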

### Dataset D

The identified model structure for dataset D did not have any calendar year trend changes in the available data. As a result, the modelling and choice of future structure was straightforward. Monitoring will be important in order to pick up on any trend changes should they occur in the future.

When applied to this dataset, the Mack method gave answers comparable to those of the optimal PTF model (though on the low side); however, it lacks predictive power, does not quantify the structure in the data, and is therefore not preferable for selection. Furthermore, the Mack method does not project volatility in the liability stream - a necessity for any cost of capital allocation.

### Dataset F

In the paid losses to date, no calendar trend changes have been observed. There have been two increases by accident period, however, and these would need to be taken into consideration when computing any underwriting risk. By contrast, the Case Reserve Estimates (CREs) have an identified trend of 22% ± 3.5% for the most recent two years. It is necessary to determine the reason for this increase before applying it to the Paid Losses.

Once again, even with no calendar trend changes (as found in the optimal PTF model), the Mack method suffered from the same problems as observed for dataset D - under-projection of the total reserve and link ratios lacking predictive power.

The optimal PTF model was selected.

### Dataset H

Both the (incremental) paid losses and the case reserve estimates (CREs) possess major trend shifts in recent calendar years, suggesting shifts in closure rates. The trend change in the Paid Losses is in the opposite direction to that in the CREs. The relationship between the two data types suggests a hypothesised forecasting scenario going forward. This hypothesised scenario could be more fully tested if we had access to the number of closed claims (NCC) triangle.

For this dataset the Mack method applied to the cumulative paid losses and the (cumulative) incurred losses gives answers that are ridiculously low. In order to obtain from the optimal PTF model the same mean given by Mack for the incurred losses, we need to assume a calendar year trend of -25% ± 3.46% over a (future) ten-year period. Another PTF scenario which gives the same answer as the Mack method on incurred is -69% ± 19% for one future calendar year, followed by 0% for the remainder of a 30-year (future) period. Neither of these future forecast scenarios is remotely plausible; they result in answers around half of what even an optimistic scenario produces.

The Mack method does not capture calendar year trends, has hardly any predictive power, and has no descriptors of the volatility in the data. It is unknown what calendar year trend assumption is made in forecasting: the Mack method does capture an average calendar year trend, but there is no descriptor of it.

Indeed, we use the bootstrap technique to show that bootstrap samples from the Mack method are not related to the data and therefore the method has absolutely nothing to do with the features in the data.
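The bootstrap comparison rests on a standard residual-resampling scheme, sketched below with hypothetical numbers of our own (not the report's code): resample standardized residuals with replacement and rebuild pseudo-datasets from the fitted values. If the fitted model has missed structure, the bootstrap samples look systematically different from the observed data.

```python
import numpy as np

# Hypothetical fitted values and residuals (observed - fitted).
rng = np.random.default_rng(1)
fitted = np.array([10.0, 12.0, 14.5, 17.0, 20.0])
resid = np.array([0.4, -0.6, 0.1, 0.5, -0.4])
sigma = resid.std(ddof=1)
std_resid = resid / sigma

# Rebuild pseudo-datasets by resampling the standardized residuals.
n_boot = 1000
samples = np.empty((n_boot, fitted.size))
for b in range(n_boot):
    draw = rng.choice(std_resid, size=fitted.size, replace=True)
    samples[b] = fitted + sigma * draw   # one pseudo-dataset

# Comparing the spread and shape of these samples against the observed
# data is the diagnostic: samples unrelated to the data indicate the
# model has not captured the data's features.
print(samples.shape)
```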

As a result of the more recent calendar trend changes, a number of scenarios for the future are considered. As the next years' data become available, the most appropriate scenario can be selected. Until the data are available, a conservative approach is adopted. Naturally, in practice the forecast scenario would be revisited at year end 2007 rather than waiting two years as in this study.

### Dataset I

As for dataset H, both the (incremental) paid losses and the case reserve estimates (CREs) possess recent major shifts in calendar year trend, suggesting that these may be driven by shifts in closure rates. However, the evidence for this hypothesis is not as strong here. The relationship between the two data types suggests a hypothesised forecasting scenario going forward. This hypothesised scenario could be more fully tested if we had access to the number of closed claims (NCC) triangle.

The underlying calendar trend in the paid losses, as identified in the optimal PTF model, is 16.95% ± 3.73%, interrupted in 03-04 by a 65% ± 11% trend change. This trend change is visible in the residuals of the Mack method; however, the method is able neither to account for this change nor to quantify it.

For this dataset the Mack method applied to both the cumulative paid losses and the incurred losses gives mean answers that appear too high. However, if the trend in the paid losses reverts to 65% ± 11% (the trend between 03-04) and then continues with the most recent trend of 16.95% ± 3.73% to calendar year 2036, then the Mack method applied to the incurred data gives (only) a reasonable mean. However, we argue in the body of the document that this scenario is pessimistic and quite unlikely.
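To see why the choice of calendar trend matters so much, note that in the PTF framework trends are measured on a log scale, so a calendar trend of ι per year multiplies each successive calendar year's losses by exp(ι). A quick illustration of the compounding (our own arithmetic, using the 16.95% trend from the text):

```python
import math

iota = 0.1695                    # 16.95% calendar-year trend (log scale)
years = 10
factor = math.exp(iota * years)  # cumulative multiplier over 10 calendar years
print(round(factor, 2))          # roughly a 5.4x increase
```

A sustained trend of this size compounds to more than a five-fold increase over a decade, which is why scenarios differing by a few percent per year diverge so sharply in their reserve indications.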

The most likely scenario is a continuation of the 16.95% ± 3.73% trend. We would have more evidence to support this conclusion if we had the number of closed claims (NCC) triangle.

### Composite model with capital allocation by line and calendar year

We can combine the four PTF models described above into one joint model in the MPTF framework. This modelling framework detects the process correlations between individual lines and uses them to fine-tune the model parameters. The resulting reserve correlations are generally smaller than the corresponding process correlations, but they do have a significant effect on aggregate standard deviations and risk capital allocations. We give the highlights of such an analysis, under the assumption that **the four datasets correspond to four lines of business in the same company**.
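The effect of reserve correlations on the aggregate can be sketched with entirely hypothetical figures (ours, not the study's): the aggregate standard deviation is the square root of s'Cs scaled by the per-line standard deviations, so positive correlations push it above the independence value.

```python
import numpy as np

# Hypothetical per-line reserve standard deviations and reserve correlations
# for four lines of business (illustrative values only).
sd = np.array([10.0, 15.0, 8.0, 12.0])
corr = np.array([
    [1.0, 0.3, 0.1, 0.2],
    [0.3, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.3],
    [0.2, 0.1, 0.3, 1.0],
])

# Covariance matrix from SDs and correlations; aggregate SD is the
# square root of the sum of all covariance entries.
cov = np.outer(sd, sd) * corr
agg_sd = float(np.sqrt(cov.sum()))

# For comparison: what independence (zero correlations) would give.
independent_sd = float(np.sqrt((sd ** 2).sum()))
print(round(agg_sd, 2), round(independent_sd, 2))
```

Even modest positive correlations lift the aggregate standard deviation noticeably above the independence figure, which in turn raises the risk capital allocated to each line.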

### Conclusion

For a model belonging to the Probabilistic Trend Family (PTF) modelling framework, the parameter estimates and process variability are depicted by four graphs. So if the residuals do not have any structure, it is immediately clear that the trend structure in the data and the quality of the process variability have been fitted 'accurately' by the model. Moreover, the actuary has control over the parameters (including calendar year trends) in formulating a forecasting scenario for the future.

By contrast, the Mack method has no descriptors of the trend structure in the data (and does not model development period zero). It often lacks predictive power, and does not capture (and measure) calendar year trend changes. Moreover, the weighted standardised residuals of the Mack method are often skewed to the right as a result of large percentage variation, on a log scale, in the corresponding incremental data.

Our emphasis is not just on ensuring consistent estimates of prior year ultimates on updating, but also on the probability distributions of the paid losses by calendar year and their correlations for the purpose of computing the cost of capital.

Once each dataset is updated, it is only in a probabilistic framework that forecast distributions as of 2006 can be compared with the observed paid losses for 2007 and 2008. Updating, forecast tracking, and monitoring of the identified model are conducted in a probabilistic framework.

The complete paper can be downloaded here (3.12 MB).