The Chartered Insurance Institute (CII) has called for clear accountability frameworks and a sector-wide skills strategy to guide the use of artificial intelligence (AI) in financial services, in recommendations submitted to the Treasury Select Committee (TSC).
Representing over 120,000 professionals, the CII said that both individuals and institutions must remain accountable for decisions driven by AI systems.
It advised that professionals should be prepared to take responsibility for the outcomes of AI applications, whether through involvement in their design or through ongoing monitoring of their use. The submission also emphasised the need for broad-based education on the risks of AI mismanagement.
According to the CII, institutions should be held accountable for the actions of their algorithms, even where it is difficult to explain how those algorithms reach their conclusions. It advocated mandatory validation and testing to assess the potential for discriminatory outcomes, with the results made publicly available to promote transparency.
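To make the validation idea concrete, the following is a minimal sketch of one common fairness check, a demographic parity test applied to hypothetical underwriting decisions. It is not drawn from the CII's submission: the group labels, sample data, and tolerance threshold are all illustrative assumptions, and a real validation regime would be defined by the firm's own policy and applicable regulation.

```python
# Illustrative sketch only: testing model decisions for disparate outcomes
# across groups. Data, group labels, and threshold are assumptions, not
# anything specified in the CII submission.

from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical underwriting decisions: (applicant group, policy offered).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

gap = demographic_parity_gap(decisions)
print(f"Approval rates: {approval_rates(decisions)}")
print(f"Demographic parity gap: {gap:.2f}")

# An assumed tolerance for illustration; real thresholds would come from
# the firm's validation policy and regulatory guidance.
ASSUMED_TOLERANCE = 0.2
if gap > ASSUMED_TOLERANCE:
    print("Potential discriminatory outcome flagged for review.")
```

Publishing results of checks like this, as the CII proposes, would let consumers and regulators see whether a firm's algorithms produce materially different outcomes for different groups.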
The CII recommended that regulatory oversight of AI in financial services adopt a proportionate approach. It proposed that all staff within regulated firms be trained on the opportunities and risks associated with AI use. The aim, it said, is to enable firms to deploy AI effectively while maintaining consumer protection standards.
Accountability in the use of AI within the insurance and reinsurance sectors has become a focal point for regulators, industry stakeholders, and policymakers. This heightened attention stems from the rapid integration of AI technologies into many aspects of insurance operations, including underwriting, claims processing, and customer service.
Internationally, the European Union's Artificial Intelligence Act (AI Act) came into force on 1 August 2024, with phased implementation over the following 24 to 36 months.
The AI Act establishes a common regulatory and legal framework for AI within the EU, classifying AI systems into four risk levels – unacceptable, high, limited, and minimal – with the most stringent obligations applying to high-risk systems used in sectors such as healthcare and law enforcement.
In the UK, the Financial Conduct Authority (FCA) has also expressed concerns that AI use in insurance could lead to some individuals becoming "uninsurable" as a result of hyper-personalisation and potential discrimination.
The submission referenced findings from the CII’s long-running Public Trust Index, which draws on consumer research to assess attitudes towards the insurance sector. It noted that AI has the potential to enhance key areas valued by consumers and SMEs – specifically cost, protection, usability, and confidence.
In support of its governance recommendations, the CII highlighted its existing resources, including the Digital Companion to the Code of Ethics and its guidance on Addressing Gender Bias in Artificial Intelligence. These tools are designed to assist professionals and organisations in adopting responsible AI practices.
Dr Matthew Connell, director of policy and public affairs at the CII, said the insurance sector has long used AI, but that ongoing evaluation is essential to ensure it serves both industry professionals and consumers.
To support professional development, the CII has created a suite of learning resources on AI. These include an introductory course in data science and AI, as well as CPD content and guidance documents exploring the benefits and risks of emerging technologies.