This article was written by Rohit Verma, CEO, Crawford & Company.
As Generative AI gains traction in the insurance industry, expectations for its transformative power are soaring. Some technology companies tout it as a potential market panacea. The hype is ratcheting up the pressure on executives to implement GenAI tools quickly, to keep pace with competitors in pursuit of the widely anticipated productivity gains.
However, while the opportunities of GenAI are indeed enticing, they should not blind us to the pitfalls and potential dangers of rushed adoption. High standards of data security, privacy, ethics, and accuracy are fundamental to any insurance business, and building effective GenAI controls to ensure they are maintained is essential.
At this point, there is a significant gap between the considerable expectations placed on GenAI and what it can deliver in practice. I believe 2024 will be a pivotal year, as we will see a better calibration between potential and realization.
GenAI is currently in its nascent stage and, as with any emerging technology, comes with its imperfections. Integrating such sophisticated tools into an organizational network presents numerous risks that will require time and firsthand experience to thoroughly understand. This is particularly true in the insurance claims adjudication field, where accuracy and reliability are crucial. Although incorporating GenAI in this area offers numerous advantages, it also comes with its own set of difficulties.
For example, the ‘learning’ function of GenAI involves analyzing vast amounts of stored information, some of which may be sensitive, raising the possibility of infringing policyholders’ privacy when a GenAI tool generates a policy report. And because such a report is generated from past data, it may also be inaccurate.
Likewise, because content produced by GenAI is based on thousands of examples of existing work by others, there is a genuine risk of copyright, patent or trademark infringements that could prove costly — reputationally and financially.
GenAI applications deliver rapid outcomes, but not necessarily right answers, with an estimated accuracy rate of 60% to 80% and a propensity for “hallucinations”, or errors. The insurance industry must be wary of over-reliance on AI output as the basis for decisions that impact the livelihoods of insured businesses and communities.
Further distortion can emanate from biases incorporated into AI models trained on skewed data, and fixes for embedded bias will not be straightforward. Training an AI model to an 85%+ level of accuracy requires a large volume of curated data, which is not always readily available in the insurance industry, creating a greater risk of hallucinations and bias in the final trained models.
Companies need to consider implementing a robust AI governance framework and proceeding with utmost caution, so that they can harness AI's potential while ensuring that the trust and reliability clients expect remain uncompromised. This includes comprehensive third-party risk assessments of the tools, platforms and service partners involved, meticulously evaluating the performance and compliance of AI tools and ensuring that data privacy, security, and reliability are upheld.
These shortfalls emphasize the importance of keeping the human in the loop. Constant and thorough reviewing of GenAI output by knowledgeable claims professionals will be needed to ensure accuracy and fairness, while also informing the necessary ongoing adjustments to improve the technology’s capabilities over time.
Excellence in customer service will always require the human touch. Skilled adjusters bring qualities like empathy, understanding and nuance to the relationship with claimants experiencing loss and sometimes trauma. While GenAI may have a role in smoothing interactions with customers, policyholders going through the stressful circumstances associated with insurance claims will expect the reassurance of an understanding human in their time of need.
For adjusters, GenAI will undoubtedly change their working life for the better if integrated effectively. The technology will act as a hard-working assistant, shouldering many of the time-consuming administrative and data-trawling responsibilities and freeing the adjuster to add value by applying their skills to settling claims. By digesting large amounts of information and presenting summaries and conclusions, GenAI will help adjusters make sound decisions more efficiently without sacrificing effectiveness.
GenAI will not replace people or processes, but it can augment both. The most successful adoption strategies will involve the incremental introduction of GenAI tools into existing workflows where they can add the most value for employees and customers. Preparing staff to work comfortably with AI, training them to be aware of best practices and ethical considerations, and equipping them to review AI-generated content thoroughly will also be key to success.
Shared standards for the ethical and responsible use of AI in the claims industry are needed. At Crawford we are developing our own AI code of conduct, and we welcome the UK claims industry’s effort to establish its own voluntary code.
As we navigate the adoption of GenAI in the insurance industry, caution must be our watchword, as we balance the promise of the technology against the need for prudent controls, data privacy and security, and ethical use.
Through responsible AI deployment, the insurance industry can optimize processes and enhance customer experiences, while upholding its commitment to fairness and accuracy in claims management.