Earlier this year the government published what it called an “interim response” to the challenges of artificial intelligence (AI). The response included eight AI Ethics Principles designed to make the technology safe, secure and reliable while further regulation is considered.
The insurance industry is a big investor in AI. Some estimates say the value of AI in insurance will reach more than US$45 billion globally by 2031.
However, there are serious concerns about AI use. “Getting AI governance right is one of the most consequential challenges of our time,” says UNESCO, the United Nations’ education, science and culture agency, on its website.
AI is one focus of the upcoming Women in Insurance Summit Australia in Sydney. A panel discussion will explore successful AI use cases across insurance disciplines and provide an overview of ethical issues.
“Legal practice and insurance are two of the top five sectors investing in AI in Australia and that’s heavily driven by recognition of the incredible potential that AI brings to both professions,” said Jehan Mata, partner with Sparke Helmore Lawyers. She specialises in professional indemnity and casualty claims and leads her firm’s cyber insurance practice.
Sparke Helmore is a Gold Sponsor of the Summit.
Mata said generative AI has the potential to make both claims handling and case handling much more efficient.
“Indeed, that efficiency is where some are seeing the threat - although at Sparke Helmore we see it as more of an opportunity to upskill,” she said.
From an ethical perspective, Mata said an AI tool ingesting client data would need to be subject to the same ethical standards as a lawyer around issues like confidentiality and client privilege.
“That requires extensive security measures to ensure that data is appropriately compartmentalised and secured,” she said. “More broadly, with the prevalence of cyberattacks in Australia, many AI tools are simply too risky to be used with sensitive data.”
Mata said it is “absolutely imperative” that confidential information never be shared with public tools like ChatGPT or Bard. Any AI tools, she said, need to sit on secure, in-house physical servers, and all staff need to be thoroughly trained.
“That’s a big ask, of course, but if it means overcoming the threats posed by AI and unlocking the opportunities for our clients, we think it’s very worth the investment,” said Mata.
The lawyer said her firm is looking at “narrow focused” use cases for AI in some high-volume parts of the business. Mata said these pilot programs “absolutely ensure data security and integrity” and are also seen as a way of inculcating a culture of innovation.
She said “getting the work right” before deploying these platforms publicly is important in the highly regulated legal and insurance sectors.
Suzi Leung, chief commercial officer for Hollard Insurance, is chairing the Summit. She is very positive about AI’s potential to help insurance customers.
“Where I see AI really helping is as a way of reducing customers’ cognitive load in times of stress after an insured event,” she said. “For example, if my home was flooded and the insurer knows that and has the data to support it - because of data showing that a big cyclone ripped through.”
A KPMG report found that insurance CEOs see AI’s ethical issues and the current lack of regulation in the space as its biggest challenges.
While the government continues to consider targeted regulations, more than 70% of insurance CEOs surveyed by KPMG agreed that “A robust regulatory framework for AI is needed that’s proportional to the risks.”
The Australian government’s voluntary framework for AI governance sets out eight AI Ethics Principles:
AI systems should benefit individuals, society and the environment.
AI systems should respect human rights and diversity.
AI systems should be inclusive and accessible and should not result in discrimination against individuals or groups.
AI systems should respect and uphold privacy rights and data protection.
AI systems should reliably operate in accordance with their intended purpose.
There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI.
When an AI system significantly impacts a person, group or environment there should be a timely process to allow people to challenge its use.
People responsible for the different phases of the AI system lifecycle should be identifiable and accountable.