New guidelines released to help insurers manage GenAI risks

Eight-step approach recommended

By Roxanne Libatique

The Financial Services Information Sharing and Analysis Center (FS-ISAC) has released new guidelines to assist financial institutions in managing risks related to Generative Artificial Intelligence (GenAI).

The report, “More Opportunity, Less Risk: 8 Steps to Manage Financial Services Data with GenAI,” provides a structured framework for data governance while addressing security, compliance, and operational concerns.

It has been released as Australian insurers accelerate the adoption of AI and emerging digital technologies to enhance operational efficiency and customer engagement. According to the “2024 ISG Provider Lens Insurance Services” report, insurers are focusing on digital transformation to optimise processes, improve risk management, and address challenges related to economic fluctuations and extreme weather claims.

Michael Silverman, FS-ISAC’s chief strategy and innovation officer, stated that GenAI offers new efficiencies but also introduces additional security challenges.

“GenAI presents enormous opportunities for financial firms to improve business operations, provide better customer service, and even improve their cybersecurity posture. However, just like any new technological development, GenAI increases security risks when it’s not leveraged in a safe and compliant manner,” he said.

GenAI guidance for financial institutions

Developed by FS-ISAC’s Artificial Intelligence Working Group, the report recommends an eight-step approach for organisations looking to integrate GenAI while maintaining compliance with industry regulations: 

  • Assess and identify risks – financial firms should evaluate how GenAI affects existing data governance frameworks and implement policies and controls to mitigate potential security gaps.
  • Select and oversee data – establish clear criteria for selecting and managing data used in GenAI applications, with regular risk assessments to ensure compliance.
  • Monitor data lineage – implement strong access controls and data classification protocols to ensure transparency in data traceability and prevent unauthorised use.
  • Restrict access and authorisation – limit access to training datasets, ensuring only authorised personnel can modify or use sensitive data for AI models.
  • Enhance data protection measures – use encryption, differential privacy techniques, and other security practices to maintain data confidentiality and integrity.
  • Develop robust testing protocols – conduct thorough model testing and validation to identify vulnerabilities and ensure GenAI applications function as intended.
  • Address AI model vulnerabilities – apply cybersecurity best practices to safeguard against risks emerging from evolving threat landscapes.
  • Ensure vendor transparency – require third-party providers to maintain compliance with regional data governance regulations and organisational security policies.

As financial institutions continue exploring AI-driven solutions, the report aims to provide guidance on balancing innovation with security and regulatory considerations.
