7 June 2019
A key question for insurers is how to gain a competitive advantage by using data effectively, given the huge amount and range of analytical tools now available.
Reserving is a particularly important area to develop, as insurers typically underutilise and underinvest in it. As a result, it is an area where gaining a competitive advantage is relatively easy. This contrasts with areas such as pricing, where many firms have already invested heavily and a competitive advantage is becoming harder to achieve.
We recently presented two case studies to a forum of insurance actuaries, on how to use data science techniques to enhance traditional insurance reserving approaches. The starting point for each case study was three key objectives:
• Improving the efficiency of the core reserving process – specifically, reducing time spent on the mundane areas of work so that reserving actuaries have more time to focus on the most useful analysis and to feed insights back to the claims and pricing teams.
• Improving the quality of the reserving process – with the aims of: increasing confidence in the reserves; spotting trends earlier; and reducing the chance of reserving surprises.
• Ensuring the new process is easy to understand and communicate – to reduce the risk of misinterpreting the results, and to ensure confidence in any changes to the reserving process.
We identified that a good way of meeting these objectives was to use data science techniques to enhance traditional reserving approaches. By retaining traditional reserving concepts such as development patterns and initial expected loss ratios, the new processes were easy to understand and communicate. In addition, this approach meant that the benefits from data science techniques could be incorporated into existing reserving processes with minimal disruption.
The two case studies were:
• Multi-factor reserving – Identifying optimal segmentations of the data and automating the judgements that need to be made when applying standard reserving approaches. This example used data science techniques such as clustering and hold-out sampling. From applying this to a book of motor insurance, we highlighted the importance of vehicle value on the development of accidental damage claims, and driver age on third party property damage claims.
• Diagnostics – Efficiently identifying important trends and anomalies in reserving triangles. This example used the data science technique of feature engineering. From applying this to a book of liability insurance we highlighted a slowing down in how quickly large claims were identified by claims handlers, and a step change in the percentage of nil claims, both of which had important implications for the reserving of this book.
See below if you would like a brief explanation of the data science techniques we mention in this blog. Please get in touch if you would like to know more about the case studies or LCP’s wider analytics work.
Data science techniques – a brief explanation
Clustering: Grouping a set of objects so that objects in the same group (or “cluster”) are more similar to each other than to those in other groups. In our case study we grouped sets of claims into those that have similar development.
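As a minimal sketch of the idea, the snippet below groups claim segments whose cumulative development patterns sit close together. The segment names, patterns and distance threshold are all hypothetical, and the simple single-linkage grouping stands in for the clustering algorithm; a production analysis might instead use k-means or hierarchical clustering from a library such as scikit-learn.

```python
# Hypothetical cumulative development patterns: the proportion of ultimate
# claims reported at each development period, for six illustrative segments.
patterns = {
    "motor_ad_low_value":  [0.55, 0.80, 0.92, 0.98, 1.00],
    "motor_ad_high_value": [0.50, 0.78, 0.90, 0.97, 1.00],
    "motor_tppd_young":    [0.20, 0.45, 0.70, 0.88, 1.00],
    "motor_tppd_older":    [0.25, 0.50, 0.72, 0.90, 1.00],
    "liability_small":     [0.10, 0.30, 0.55, 0.80, 1.00],
    "liability_large":     [0.08, 0.28, 0.52, 0.78, 1.00],
}

def dist(a, b):
    """Euclidean distance between two development patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster(patterns, threshold=0.1):
    """Single-linkage grouping: a segment joins the first group that
    already contains a segment within `threshold` of it; otherwise it
    starts a new group."""
    groups = []
    for name, pat in patterns.items():
        for g in groups:
            if any(dist(pat, patterns[m]) < threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

groups = cluster(patterns)
# The six segments fall into three clusters with similar development:
# accidental damage (fast), third party property damage (medium),
# liability (slow) -- segments in a cluster can then share a pattern.
```

Segments that land in the same cluster can then be reserved together using a shared development pattern, which is the segmentation decision the case study automated.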
Hold-out sampling: The full dataset is split into two. The model is fitted to one part, and the performance of the model is assessed on the other. In our case study we assessed the predictive power of using different averaging periods for a development pattern to predict recent claims experience.
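The sketch below illustrates the averaging-period question with made-up numbers: the age-to-age development factors, the train/hold-out split and the candidate averaging periods are all hypothetical. Older accident years act as the fitting data, the most recent years are held out, and each candidate averaging period is scored on how well its average predicts the held-out factors.

```python
# Hypothetical 12-to-24-month age-to-age development factors by accident
# year, oldest first, showing a gradual speeding-up of development.
factors = [1.85, 1.80, 1.78, 1.72, 1.65, 1.60, 1.58, 1.54, 1.53, 1.52]

# Split the data in two: fit on the first seven years, assess on the
# three most recent (held-out) years.
train, holdout = factors[:7], factors[7:]

def holdout_error(avg_period):
    """Mean squared error on the hold-out set when the development factor
    is estimated as the average of the last `avg_period` training years."""
    est = sum(train[-avg_period:]) / avg_period
    return sum((actual - est) ** 2 for actual in holdout) / len(holdout)

# Compare candidate averaging periods on the held-out years.
errors = {n: holdout_error(n) for n in (3, 5, 7)}
best = min(errors, key=errors.get)
# With a clear downward trend in the factors, the shortest averaging
# period tracks recent experience best on the hold-out set.
```

In this illustrative data the 3-year average wins because the factors are trending down, so longer averages lag behind recent experience; with stable factors a longer period would typically score better by smoothing out noise.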
Feature engineering: Using domain knowledge of the data to create features that make machine learning algorithms work. In our case study we created summary metrics which captured the extent to which there was an increasing trend or anomaly in over 1,000 reserving triangles.
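A minimal sketch of one such summary metric is below. The triangles, book names and flagging threshold are hypothetical; the engineered feature is a normalised slope of first-development-period claim counts across accident years, a simple proxy for a trend in reporting speed that can be computed for each of a large number of triangles and used to flag the ones worth a closer look.

```python
# Hypothetical reported-claim-count triangles: rows are accident years,
# columns are development periods (recent years have fewer columns).
triangles = {
    "stable_book":  [[100, 150, 160], [102, 151], [101]],
    "slowing_book": [[100, 150, 160], [80, 148], [60]],
}

def trend_feature(first_col):
    """Engineered feature: least-squares slope of first-period claim
    counts across accident years, scaled by the mean so that books of
    different sizes are comparable."""
    n = len(first_col)
    x_bar = (n - 1) / 2
    y_bar = sum(first_col) / n
    slope = (
        sum((x - x_bar) * (y - y_bar) for x, y in enumerate(first_col))
        / sum((x - x_bar) ** 2 for x in range(n))
    )
    return slope / y_bar

# One feature per triangle: the trend in first-period reported counts.
features = {name: trend_feature([row[0] for row in tri])
            for name, tri in triangles.items()}

# Flag triangles whose trend exceeds a (hypothetical) materiality threshold.
flagged = [name for name, f in features.items() if abs(f) > 0.05]
```

Computing a handful of such features per triangle turns a review of over 1,000 triangles into a ranked list of exceptions, which is the efficiency gain the diagnostics case study targeted.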