The Global Federation of Insurance Associations (GFIA) has submitted feedback to the International Association of Insurance Supervisors (IAIS) regarding its draft application paper on the supervision of artificial intelligence (AI).
In its response, GFIA emphasised the need for a balanced and proportionate approach to AI supervision aimed at ensuring responsible use of AI.
The association also underscored the importance of collaboration between insurers and supervisors to develop AI supervision guidelines that are both effective and adaptable.
GFIA highlighted that the list of potential AI risks in the draft paper does not fully consider existing regulatory frameworks and risk management practices.
For instance, on the issue of biased outcomes, GFIA pointed out that current regulations already address which characteristics may permissibly be used in data analysis, alongside the ethical and reputational considerations insurers apply when developing or deploying AI systems. The association suggested that these concepts should be contextualised within local regulatory environments.
The federation also noted that the rapid development of AI necessitates an open dialogue between insurers and supervisors. GFIA warned that overly prescriptive or restrictive guidance could quickly become outdated and might hinder the industry’s ability to benefit from AI advancements.
“This level of specificity may limit the ability of supervisors and insurers to adopt a flexible, risk-based approach,” the association said, “which in turn could lead to overly burdensome compliance requirements and deter insurers from exploring beneficial AI applications due to the high costs of compliance, ultimately hindering innovation and consumer benefits.”
Another concern is the definition of AI used in the paper, which GFIA said could conflate AI with other advanced data applications that have been integral to the insurance industry for years. The association said a narrower definition would help distinguish AI from other data-driven tools.
In its feedback, GFIA recommended that the IAIS explicitly recognise that data collection should have a specific link to supervisory needs and not be overly broad, aligning with Insurance Core Principles. This approach aims to ensure that data collection efforts are relevant and not unnecessarily burdensome for insurers and supervisors.
“GFIA encourages the IAIS to prioritise flexibility and proportionality in its guidance on AI supervision,” the association said. “A balanced approach that considers both the risks and benefits of AI will support responsible innovation while protecting consumers.”
AI has recently been a central topic of discussion within the insurance industry, not only for its benefits but also for its potential risks. In a presentation last month, Apollo Underwriting said AI opens major opportunities for insurers, but debate continues over whether those benefits outweigh the risks associated with the technology.
That does not appear to deter insurers, however. According to KPMG's 2023 global tech report, 52% of respondents identified AI, including machine learning and generative AI, as the most important technology for helping them achieve their ambitions over the next three years.
Still, the full range of risks associated with AI remains uncertain as the technology continues to develop and improve. GFIA suggested that the key to AI supervision in the insurance industry lies in continuing to foster constructive dialogue between the industry and regulators.