Amid the insurance sector's rush to adopt artificial intelligence (AI) and machine learning to cut costs and price risk more accurately, a prominent industry leader has warned that algorithm use also carries risks "not yet fully understood by industry or regulators."
APRA board member Geoff Summerhayes sounded the alarm on AI at the Insurance Council of Australia's annual forum this year, where he described algorithm use as "an emerging risk within an accelerating risk."
Summerhayes said that aside from the "opportunities that artificial intelligence and machine learning present for fine-tuning and innovation in risk assessment, underwriting, loss prevention, and customer engagement..., algorithm use also brings risks that are not yet fully understood by [the] industry or regulators," the Australian Financial Review reported.
“It is, if you like, an emerging risk within an accelerating risk,” Summerhayes said. “By removing human oversight from important decision-making processes, and instead relying on machine-to-machine interactions, governance and transparency become inherently difficult.”
Summerhayes said any flaws in an algorithm would be difficult to detect or fix.
“These risks increase with the use of machine self-learning techniques, which impart greater predictive power to algorithms but make them significantly more complex,” he said. “This opaqueness is already being targeted by cyber criminals seeking to corrupt either the algorithm, or data used to train it, in order to manipulate its conclusions.”
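The kind of attack Summerhayes describes is usually called data poisoning: an attacker who can tamper with the data used to train a model can shift its conclusions without ever touching the model itself. The sketch below is an illustration of the general technique, not of any detail from his speech; the dataset, the logistic-regression model, and the 15 per cent flip rate are all illustrative assumptions.

```python
# A minimal sketch of a label-flipping data-poisoning attack.
# All specifics here (dataset, model, flip rate) are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic binary classification task stands in for, say, underwriting data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data.
clean = LogisticRegression().fit(X_train, y_train)

# An attacker flips 15% of the training labels, corrupting the data
# the model learns from rather than the model itself.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.15 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

# Model trained on the poisoned data draws systematically worse conclusions.
poisoned = LogisticRegression().fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

The point of the sketch is that the poisoned model still trains and runs without error, which is why such tampering is hard to detect once human oversight is removed from the loop.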
And it’s not just Summerhayes who is concerned about the potential dangers of algorithm use.
Stan Gallo, a partner at KPMG's forensics unit, said the "deliberate compromise" of an algorithm could result in a wide range of unintended consequences – especially given that the rise of advanced data analytics and AI has led to an explosion of algorithm use, from loan applications and insurance underwriting to chatbot technology.
“At a high level, algorithms take data, look for patterns or trends and then act on them for a specific goal or outcomes,” Gallo told AFR Weekend. “Biases can also come into play, leading to poor learning from the AI.”
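The bias problem Gallo raises can be shown without any attacker at all: a model trained on data that under-represents part of the population "learns" patterns that fail on the missing segment. The following sketch is an illustration under assumptions of this piece, not Gallo's; the two synthetic segments and the 95/5 sampling split are invented for the example.

```python
# A minimal sketch of sampling bias leading to poor learning.
# The two "segments" and the skewed training mix are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_segment(center, slope, n):
    # Each segment has its own relationship between the feature and the label.
    x = rng.normal(center, 1.0, size=(n, 1))
    y = (slope * x[:, 0] + rng.normal(0, 0.5, n) > slope * center).astype(int)
    return x, y

x_a, y_a = make_segment(center=0.0, slope=1.0, n=1000)   # well represented
x_b, y_b = make_segment(center=5.0, slope=-1.0, n=1000)  # under-represented

# Training set: 95% segment A, 5% segment B -- a biased sample.
X_train = np.vstack([x_a, x_b[:50]])
y_train = np.concatenate([y_a, y_b[:50]])

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on segment A:", model.score(x_a, y_a))
print("accuracy on segment B:", model.score(x_b, y_b))  # typically much worse
```

Nothing here is malicious; the skewed sample alone is enough for the model to perform well on the majority segment and poorly on the minority one.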
Meanwhile, a new report, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, warns that AI and machine learning are "altering the landscape of security risks for citizens, organisations and states," which could give rise to new forms of cybercrime and political disruption in the next decade.
The report’s writers – 26 experts from renowned institutions that include Oxford, Stanford, Cambridge, and Yale universities – urged policymakers and technical researchers to be vigilant to prevent and mitigate malicious uses, AFR said.
“As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the expansion of existing threats, the introduction of new threats, and a change to the typical character of threats,” the report said.
Another prominent voice warning about AI is Tesla and SpaceX boss Elon Musk, who has called AI a "fundamental risk to civilisation" and "more dangerous than nukes," AFR reported.