Ethical standards and AI in insurance

Lifting the lid on “one of the most consequential challenges”

By Daniel Wood

Earlier this year the Australian government published what it called an “interim response” to the challenges of artificial intelligence (AI). The response included eight AI Ethics Principles designed to make the technology safe, secure and reliable while further regulation is considered.

The insurance industry is a big investor in AI. Some estimates say the value of AI in insurance will reach more than US$45 billion globally by 2031.

However, there are serious concerns about AI use. “Getting AI governance right is one of the most consequential challenges of our time,” says UNESCO, the United Nations’ education, science and culture agency, on its website.

AI is one focus of the upcoming Women in Insurance Summit Australia in Sydney. A panel discussion will explore successful AI use cases across insurance disciplines and provide an overview of ethical issues.

AI in legal practice and insurance

“Legal practice and insurance are two of the top five sectors investing in AI in Australia and that’s heavily driven by recognition of the incredible potential that AI brings to both professions,” said Jehan Mata, partner with Sparke Helmore Lawyers. She specialises in professional indemnity and casualty claims and leads her firm’s cyber insurance practice.

Sparke Helmore is a Gold Sponsor of the Summit.

Mata said generative AI has the potential to make both claims and case-handling work much more efficient.

“Indeed, that efficiency is where some are seeing the threat - although at Sparke Helmore we see it as more of an opportunity to upskill,” she said.

From an ethical perspective, Mata said, an AI tool ingesting client data would need to be subject to the same ethical standards as a lawyer on issues like confidentiality and client privilege.

“That requires extensive security measures to ensure that data is appropriately compartmentalised and secured,” she said. “More broadly, with the prevalence of cyberattacks in Australia, many AI tools are simply too risky to be used with sensitive data.”

Keep AI tools secure and in-house

Mata said it is “absolutely imperative” that confidential information is never shared with public tools like ChatGPT or Bard. Any AI tools, she said, need to sit on secure, in-house physical servers, and all staff need to be thoroughly trained.

“That’s a big ask, of course, but if it means overcoming the threats posed by AI and unlocking the opportunities for our clients, we think it’s very worth the investment,” said Mata.

The lawyer said her firm is looking at “narrow focused” use cases for AI in some high-volume parts of the business. Mata said these pilot programs “absolutely ensure data security and integrity” and are also seen as a way of inculcating a culture of innovation.

She said “getting the work right” before deploying these platforms publicly is important in the highly regulated legal and insurance sectors.

Flipping the insurance model

Suzi Leung, chief commercial officer for Hollard Insurance, is chairing the Summit. She is very positive about AI’s potential to help insurance customers.

“Where I see AI really helping is as a way of reducing customers’ cognitive load in times of stress after an insured event,” she said. “For example, if my home was flooded and the insurer knows that and has the data to support it - because of data showing that a big cyclone ripped through.”

Insurance CEOs support more regulation

A KPMG report found that insurance CEOs see AI’s ethical issues and the current lack of regulation in the space as the technology’s biggest challenges.

While the government continues to consider targeted regulations, more than 70% of insurance CEOs surveyed by KPMG agreed that “a robust regulatory framework for AI is needed that’s proportional to the risks.”

Australia’s eight AI Ethics Principles

1. Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.

2. Human-centred values: AI systems should respect human rights and diversity.

3. Fairness: AI systems should be inclusive and accessible and should not result in discrimination against individuals or groups.

4. Privacy protection and security: AI systems should respect and uphold privacy rights and data protection.

5. Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.

6. Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI.

7. Contestability: When an AI system significantly impacts a person, group or environment, there should be a timely process to allow people to challenge its use.

8. Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable.
