How generative AI is reshaping conversations around liability

Grappling with the limitations of technology

By Mia Wallace

As regulators struggle to keep pace with how quickly generative AI (genAI) is evolving, the question of liability takes centre stage. Earlier this year, AI-powered chatbots were thrust into the spotlight when Air Canada was held liable for bad advice its chatbot gave a passenger, with the airline ordered to pay damages and tribunal fees.

Commenting on the ruling in the “unusual” Air Canada case, Helen Bourne, partner at Clyde & Co, noted that companies are grappling with a new definition of liability. There are now situations where no human is involved in the decision-making under question, she said, and cases like this should prompt organisations to reflect on how they use new technologies, grounded in a full understanding of their exposure to the outcomes.

How can businesses reframe how they approach liability risk?

It’s clear that genAI is significantly reshaping how businesses must approach liability, bringing new challenges and considerations to the forefront. A primary concern is data management and intellectual property (IP). Generative AI systems often rely on vast amounts of data, which heightens the risk of data privacy breaches and IP infringement.

It’s a risk that necessitates stringent data governance policies and robust cybersecurity measures to prevent unauthorised data usage and ensure compliance with relevant regulations. “Make no mistake, we're still in the experimental phase of this next wave of advanced AI tools,” said Rory Yates, SVP of corporate strategy at EIS. “Unlike prior tech revolutions, this one has evolved and scaled far quicker than anyone could have imagined. And liability, until now, has been a lagging topic of consideration.”

Yates noted that ChatGPT hit a million users in five days, and is now at c. 180 million users worldwide. “It’s mind-boggling,” he said. “You can’t pick up an industry rag without seeing headlines that suggest it’s everywhere, and everyone has already been using it for some time. The insurance sector is no different.”

Any conversation about AI is incomplete without acknowledging the potential for inaccurate or biased outputs from generative AI tools. Businesses that rely on AI-generated content or decisions must ensure those outputs are accurate and non-discriminatory, particularly in sectors such as healthcare and finance. In those fields, legal liability could arise from negligence if a business fails to verify the accuracy of AI-generated information, or if an AI system's lack of transparency leads to unforeseen errors.

AI and threat actors – an evolving threat

Further complicating matters, the question of liability is not limited to inherent limitations or errors within the technology itself; it extends to how the technology can be weaponised by threat actors. Core concerns hinge on its use to create sophisticated deepfakes for malicious purposes, including financial fraud, extortion, and misinformation.

In addition, the commoditisation of these tools across the dark web is putting sophisticated cyber capabilities in the hands of even the most unsophisticated cyber criminals. The implications stretch to impersonation, fraud, and the bypassing of authentication systems, creating significant risk for businesses that rely on digital verification processes.

Ultimately, it’s about understanding the limitations of technology. “Historically, we’ve always understood the impact of human error,” Bourne said. “Now we’re dealing with technology capable of producing outcomes we would never have contemplated. And we’re having to question whether that technology can be compromised by bad actors to achieve a very different outcome.”

Overall, as businesses continue to adopt generative AI technologies, they must do so with a clear understanding of their own security frameworks and how these can be used to address emerging threats. Companies can mitigate these risks in several ways: implementing comprehensive review processes, providing adequate training for employees using AI tools, and establishing clear accountability frameworks for AI-related decisions.

Yates highlighted that another key driver in changing the liability conversation is regulation. “We all knew regulation on AI was coming,” he said. “In the US, insurers Humana, Cigna, and UnitedHealthcare are facing class actions from consumers and their estates for allegedly deploying advanced technology to deny claims. Acting responsibly is critical.”

Such codes and standards help the industry plot a successful path forward by speeding up the development process and helping companies avoid the risk of compliance teams stifling plans, or worse, regulators coming in and killing businesses.
