The growing use of generative artificial intelligence (GenAI) in Australia’s insurance industry is prompting both enthusiasm and caution as insurers explore its potential to reshape traditional processes while navigating new risks.
But as adoption expands, industry leaders and regulators are closely examining governance standards, ethical implications, and compliance requirements.
Taylor Fry’s RADAR FY2024 report suggested that while AI uptake is accelerating in Australia, the frameworks designed to manage associated risks need updating.
The report flagged a potential gap between the current regulatory structure and the demands of emerging AI applications, particularly where existing policies may unintentionally cover AI-related incidents.
In addition, global research by NTT Data suggested a disconnect between the pace of AI development and the maturity of governance frameworks. Based on a survey of more than 2,300 senior executives, the report found that over 80% believe their organisations lack leadership, oversight, and employee readiness for AI adoption.
Brandon Nutall, chief digital and AI officer at Xceedance, said GenAI could play a critical role in expanding access to insurance by enabling more standardised and data-driven products.
“GenAI – as part of a data strategy – has the potential to supercharge the insurance industry,” he said. “Only 6% of insurable assets worldwide are covered – it’s clear we have a fundamental problem when the people who need insurance most cannot access it.”
He said one way forward could mirror banking sector strategies, where product standardisation and commoditisation have enabled broader customer reach.
While insurance carries added complexities, Nutall noted that recent technological advancements have made such transformation more feasible.
He also pointed to improved data management as a critical enabler.
“The industry has been hamstrung by a lack of standardised data – GenAI can enable us to rapidly process and standardise data to create more commoditised products to improve accessibility and drive industry growth,” Nutall said.
However, Taylor Fry’s report noted that some AI applications – particularly those involving automation in customer engagement or pricing – may require tailored controls.
Insurers are responding by refining policy language, adjusting underwriting approaches, and re-evaluating coverage boundaries related to AI. Silent coverage – where policies unintentionally cover AI-related incidents – is a specific concern being addressed through clearer terms.
Broader guidance is also emerging to help institutions manage GenAI risks. The Financial Services Information Sharing and Analysis Center (FS-ISAC) recently published a report outlining eight steps for responsible AI use, from securing data access to ensuring vendor transparency.
Additionally, new insurance products are being considered. Nutall highlighted the potential for offerings focused on AI liability and digital assets. These would address emerging risks where AI operates independently or where cryptocurrencies and decentralised platforms come into play.
“Normally, if something goes wrong, the law looks for a person or company to blame,” he said. “But AI can make decisions on its own in unpredictable ways, making it harder to pinpoint who is at fault.”
Digital asset and cryptocurrency insurance is also gaining attention as decentralised finance tools intersect with AI systems.
Nutall noted that although such technologies are still emerging, they present cybersecurity and operational risks that insurers may increasingly need to underwrite.
“We should aspire to an insurance industry which delivers financial security for all – embracing the possibilities brought to us through machine intelligence,” he said.