First report on AI highlights governance concerns

Regulatory body raises red flag

Insurance News

By Jonalyn Cueto

The Australian Securities & Investments Commission (ASIC) has issued a warning to financial services and credit licensees: ensure your governance practices keep pace with the rapid adoption of artificial intelligence (AI) or risk significant consumer harm.

This cautionary message comes in the wake of ASIC’s inaugural report examining AI usage across 23 licensees in sectors ranging from retail banking and credit to insurance and financial advice. While current AI deployment remains relatively conservative, focused primarily on supporting human decision-making and boosting efficiency, ASIC chair Joe Longo stressed the urgency for robust governance frameworks as AI applications expand.

“Around 60% of licensees intend to ramp up AI usage, which could change the way AI impacts consumers,” Longo said.

He noted that without governance processes keeping pace, significant risks could emerge. “Without appropriate governance, we risk seeing misinformation, unintended discrimination or bias, manipulation of consumer sentiment and data security and privacy failures, all of which has the potential to cause consumer harm and damage to market confidence,” said Longo.

Gaps in clear regulations

The report, titled “Beware the Gap: Governance Arrangements in the Face of AI Innovation,” revealed a concerning disconnect between AI adoption and governance readiness. Nearly half of the licensees surveyed lack policies addressing consumer fairness or bias in AI systems. Even fewer have protocols for disclosing AI usage to consumers.

“It is clear that work needs to be done – and quickly – to ensure governance is adequate for the potential surge in consumer-facing AI,” Longo said. He urged licensees not to wait for AI-specific laws and regulations, but to proactively apply existing consumer protection provisions and director duties to their AI deployments.

ASIC’s report highlighted the potential for a “governance gap” to emerge, where the drive for AI innovation outstrips the development of necessary safeguards. ASIC noted this gap could widen under competitive pressure, leaving consumers vulnerable to a range of harms, from biased lending decisions to manipulative marketing tactics.

The report also underscored the importance of due diligence when engaging third-party AI suppliers. Licensees are considered responsible for ensuring that AI systems, whether developed in-house or procured externally, meet legal and ethical standards.

Key takeaways from the report:

  • AI usage is accelerating: Expect a significant increase in AI applications across the financial sector.
  • Governance is lagging: Many licensees lack adequate policies to address consumer risks associated with AI.
  • Consumer disclosure is lacking: Few licensees have formal protocols for informing consumers about AI usage.
  • Existing obligations apply: Licensees must proactively apply current consumer protection laws to AI systems.
  • Third-party risk is critical: Due diligence is essential when using AI solutions from external vendors.

ASIC’s report calls for a commitment to robust governance frameworks that prioritise consumer protection and market integrity.
