I always find it ironic that, in an industry that relies so heavily on data for modelling and actuarial assessment, we are notoriously bad at collecting it. And where we do have it, it is usually in a format that is of no use to man or beast.
Another bugbear of mine is the incessant chatter I hear in the market about the drive for digitalisation. Advancements in technology are exciting, but if we have not got to grips with the currency of digitalisation, data itself, then all the infrastructure and systems in the world will not deliver the benefits they promise. This is particularly true if you have to spend endless time extracting the relevant data, and almost as long again standardising it to feed the process.
Not only are we notoriously bad at collecting data in a useful form, we have also politicised it. For those who manage to capture even a small subset of data in a digestible format, this can translate into a chargeable service. If that service goes beyond simple data cleansing into valuable insight, it has the potential to create tangible competitive advantage. Then there is the question of who owns the data and who controls it.
But let’s not kid ourselves: there is a huge amount of data that does not fall into this category. It is what we call transactional data, and it oils the machines that get premiums paid and claims settled. It is for this reason that the current data work at Lloyd’s is concentrating on uncovering the Core Data Record (CDR): the data that is needed for our downstream processes and reporting. It does not attempt to cross over into ‘value creation’ territory, nor does it look at risk selection data. In doing so, Lloyd’s has tried to de-politicise data, turning its focus towards grassroots processing and sourcing only the data needed to achieve it.
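To make the idea concrete, here is a minimal sketch of what a core data record might look like, written as a Python dataclass. The field names are my own illustrations, not the actual CDR specification, which is defined by Lloyd’s.

```python
from dataclasses import dataclass
from datetime import date

# A hypothetical, much-simplified core data record. The real CDR field set
# is defined by Lloyd's; these fields are illustrative only.
@dataclass
class CoreDataRecord:
    umr: str                 # Unique Market Reference identifying the contract
    insured_name: str        # Named insured on the contract
    inception_date: date     # Start of the period of cover
    expiry_date: date        # End of the period of cover
    gross_premium: float     # Premium amount used for settlement
    premium_currency: str    # ISO 4217 currency code, e.g. "USD"
    class_of_business: str   # Classification used for downstream reporting

record = CoreDataRecord(
    umr="B0000XYZ12345",
    insured_name="Example Shipping Co",
    inception_date=date(2023, 1, 1),
    expiry_date=date(2023, 12, 31),
    gross_premium=250_000.0,
    premium_currency="USD",
    class_of_business="Marine Hull",
)
```

The point of a record like this is precisely its dullness: every field exists to feed a downstream process, not to confer an edge on whoever holds it.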
As the sponsor of the data workstream, I am acutely aware that identifying these core fields is complicated, and there is always the temptation to put in the “nice-to-haves”. Even more complicated is the process by which we harvest them. So much of what we need is tied up in the Market Reform Contract, or MRC, which in and of itself was never meant to be a data capture tool. Not only that, but the curse of the schedule and the section has thwarted previous attempts to de-code the MRC. We are going to have to think very creatively to find a workable, flexible, interim solution to these conundrums, at least until we enable fully digital contracts.
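A deliberately naive sketch shows why. Suppose we try to pull labelled fields out of MRC-style text with simple pattern matching; the headings and sample text below are invented for illustration. The same heading can appear once per section, and a value can be deferred to a schedule, so the extraction yields ambiguity rather than a clean record.

```python
import re

# Naive extraction of labelled fields from MRC-style text. The headings
# and the sample document are hypothetical. Real MRCs are free-format:
# the same heading can recur per section, and values can be deferred
# ("As per Schedule"), which is exactly where approaches like this fail.
MRC_FIELDS = ["UMR", "Insured", "Period", "Premium"]

def extract_fields(mrc_text: str) -> dict[str, list[str]]:
    """Collect every value found for each heading, wherever it appears."""
    found: dict[str, list[str]] = {}
    for field in MRC_FIELDS:
        # Match "HEADING:" at the start of a line and capture the rest.
        pattern = re.compile(rf"^{field}\s*:\s*(.+)$", re.IGNORECASE | re.MULTILINE)
        found[field] = [m.strip() for m in pattern.findall(mrc_text)]
    return found

sample = """\
UMR: B0000XYZ12345
Insured: Example Shipping Co
Premium: As per Schedule
Premium: USD 250,000 (Section 2)
"""
print(extract_fields(sample))
# {'UMR': ['B0000XYZ12345'], 'Insured': ['Example Shipping Co'],
#  'Period': [], 'Premium': ['As per Schedule', 'USD 250,000 (Section 2)']}
```

Two competing values for one premium field, and an empty period: that, in miniature, is the curse of the schedule and the section.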
A great deal of work has been done, and is still being done, to solve the issue. Many third-party providers, including the market’s digital trading platform providers, have risen to the challenge. However, for this to work we need all of the data, all of the time, irrespective of the route to market. For this there needs to be a monumental joining of the dots.
The more we can extract, augment and scrape from existing sources, the better. We need to minimise the amount of data that market players input and ensure that, where data does need to be keyed, it is keyed by the people who will ultimately benefit, jointly or individually, from the service or report it is there to facilitate. If we don’t, we risk politicising it all over again.
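As a sketch of that principle, with invented source names and fields: merge whatever existing systems already hold, and ask a human to key only the residual gap.

```python
# "Extract first, key last": fill each field from the first source that
# has it, then report only what genuinely needs manual keying. The source
# names and required fields here are hypothetical.
REQUIRED_FIELDS = {"umr", "insured_name", "gross_premium", "premium_currency"}

def assemble_record(*sources: dict[str, str]) -> tuple[dict[str, str], set[str]]:
    """Merge sources in priority order and list the fields still missing."""
    record: dict[str, str] = {}
    for source in sources:
        for field, value in source.items():
            record.setdefault(field, value)  # earlier sources win
    missing = REQUIRED_FIELDS - record.keys()
    return record, missing

placing_platform = {"umr": "B0000XYZ12345", "insured_name": "Example Shipping Co"}
accounting_feed = {"gross_premium": "250000"}

record, still_to_key = assemble_record(placing_platform, accounting_feed)
print(still_to_key)  # {'premium_currency'} -- the only field left to key by hand
```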
The spirit with which we approach the next phase of the journey is crucial. This phase aims to answer the key questions: the what, the how and the who. If we get this right, we can really start to reap the benefits of the many “technology carts” already doing the rounds in the market. We may even be able to upgrade to a driverless car.