Generative AI and phishing attacks – what the insurance industry needs to know

Why MFA is no longer seen as a "silver bullet" solution

By Mia Wallace

Generative AI (GenAI) is reshaping conversations around liability, and cyber experts across the insurance and technology industries are warning that it could be weaponized by threat actors.

By putting advanced capabilities in the hands of a broader range of individuals and groups, including those with limited technical expertise, GenAI has the potential to open up a new threat horizon. Whether through more sophisticated phishing attacks, social engineering, malware creation, vulnerability exploitation or the evolution of spam and botnets – GenAI lowers the barrier to entry for cyberattacks.

Take phishing attacks, for example, which remain the most likely form of cyber incident due to their relatively low cost and high effectiveness. Their recent surge and increased sophistication is at least partly linked to the proliferation of GenAI applications, which lower the barrier to entry and help threat actors bypass security measures, including multi-factor authentication (MFA).

What is multi-factor authentication (MFA)?

At its most basic level, MFA is a security mechanism requiring users to provide at least two verification factors to access an online resource. Designed to enhance the security of an application, MFA has traditionally been enforced or encouraged by insurance companies to make it harder to gain unauthorised access to clients’ systems.
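
For context, here is a minimal sketch of how one common second factor – a time-based one-time password (TOTP) – is verified server-side, using only Python’s standard library per RFC 6238. The secret and the ±1-step drift window are illustrative choices, not any particular vendor’s implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(at // step))  # moving factor: count of time steps
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1, step: int = 30) -> bool:
    """Accept the current code plus or minus `window` steps to absorb clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + drift * step), submitted)
        for drift in range(-window, window + 1)
    )

# Example: both sides derive the same six-digit code from the shared secret.
print(verify("JBSWY3DPEHPK3PXP", totp("JBSWY3DPEHPK3PXP", time.time())))  # True
```

Note what the check establishes: that the submitted code matches the shared secret at that moment – not who typed it, or where. That gap is precisely what real-time phishing exploits.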

Yet in GenAI, MFA appears to have met its match. “There is a real school of thought that this threat is coming and that it’s going to really impact our threat landscape,” said Helen Bourne, partner at Clyde & Co. “Phishing attacks, for instance, are probably the highest they’ve ever been. And ransomware, which dropped off a little bit last year, is now back with a vengeance.”

It’s becoming increasingly difficult to identify a phishing attempt, and the technology is developing to the point where the volume of phishing campaigns that can be deployed at speed will increase exponentially – a serious threat to companies already finding it challenging to mitigate this risk. “The threat actors have now developed their MOs in such a way that they deploy phishing campaigns deliberately designed to circumvent MFA procedures,” Bourne said.

Threat actors are now sending emails engineered to prompt recipients to authenticate – so their credentials can be harvested. “Another factor behind that is that while laptops and their systems are within a secure environment, very often people use their own phones, which circumvent those protections, so there’s a lot less endpoint monitoring going on,” Bourne added.
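
One mitigation, sketched below under assumed data, is to compare the context of a login with the context of the MFA approval that confirms it. The AuthEvent fields and rules here are hypothetical, not a specific product’s schema:

```python
from dataclasses import dataclass

@dataclass
class AuthEvent:
    user: str
    ip: str         # source address of the event
    asn: int        # autonomous system of the source network
    device_id: str  # managed-device identifier; empty if unmanaged (e.g. a personal phone)

def suspicious_mfa_approval(login: AuthEvent, approval: AuthEvent) -> list[str]:
    """Flag MFA approvals whose context does not match the login they confirm.

    In a relay-style phish, the password and code are entered on the attacker's
    infrastructure, so the login's network context often differs from the
    approving device's, and the approving device is frequently unmanaged.
    """
    flags = []
    if login.ip != approval.ip and login.asn != approval.asn:
        flags.append("login and MFA approval came from unrelated networks")
    if not approval.device_id:
        flags.append("approval from an unmanaged device outside endpoint monitoring")
    return flags
```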

“A risk-based war”

Rory Yates, SVP of corporate strategy at EIS, highlighted that AI is “bubbling with risk”. “We’re caught in a risk-based war,” he said. “The barriers to bad actors using these new and highly available technologies to deep-fake, manipulate images, and get past MFA protocols are beyond tempting. Many in the industry are utterly convinced there’s lots of it happening – way more than they can fully quantify.”

Hackers and fraudsters are also using AI to supercharge techniques like social hacking, where they assume the identity of someone within the business to gain more details through social engineering. An extreme example was reported by Uber, where a threat actor impersonated a member of the Uber security team and infiltrated Slack after getting an employee to approve their login via the MFA tooling.

How can the insurance industry mitigate the threat?

Yates supplied some strong examples of how the insurance industry can mitigate its risk profile, highlighting that: “Those insurers best able to work with multiple partners on advanced MACH-based core systems will have access in real-time to fraud bureaus, data sources, and a multitude of tools that, in conjunction, will make the detection of fraud and hacks far better. Equally, they’ll be able to process or orchestrate these and remove any bad threats.”
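
As a rough illustration of the orchestration pattern Yates describes, the asyncio sketch below fans a claim out to several risk sources in parallel and combines the scores. The services, scores and threshold are hypothetical placeholders for real fraud-bureau and data-source integrations:

```python
import asyncio
from typing import Awaitable, Callable

# Hypothetical stand-ins for real integrations: a fraud bureau lookup, a
# device-reputation service and a document-verification API. Each returns
# a risk score in [0, 1].
async def fraud_bureau(claim_id: str) -> float:
    await asyncio.sleep(0.1)  # simulated network call
    return 0.2

async def device_reputation(claim_id: str) -> float:
    await asyncio.sleep(0.1)
    return 0.7

async def document_check(claim_id: str) -> float:
    await asyncio.sleep(0.1)
    return 0.1

CHECKS: list[Callable[[str], Awaitable[float]]] = [
    fraud_bureau, device_reputation, document_check,
]

async def assess(claim_id: str, threshold: float = 0.6) -> str:
    # Query every source concurrently so the decision arrives in near real time.
    scores = await asyncio.gather(*(check(claim_id) for check in CHECKS))
    return "refer to investigator" if max(scores) >= threshold else "straight through"

if __name__ == "__main__":
    print(asyncio.run(assess("CLM-001")))  # refer to investigator
```

Because the checks run concurrently, adding a new data source extends coverage without adding latency – the property that makes detection feasible in real time at quote or claim time.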

Bourne shared that many of the clients her team assists with breach responses do have strong compliance and governance measures in place. The issue is a lack of awareness among individuals of their own risk profiles, and a lack of continuous training and development. “You can never do too much,” she said. “Where people have deployed very comprehensive training, doing that once a year now is not enough. In the space of those 12 months, the risk has moved on.”

Another challenge facing the industry is that insurers can rarely afford to fail fast, and so are instead tasked with “learning fast” in the face of huge amounts of change. “There’s a reason we haven’t seen the same maturity curve we have in other industries in insurance,” Yates said. “It’s significantly scaled in legacy, which makes it even more complicated and often impossible to adapt.”

What is clear is that collaboration between insurance businesses will be key to offsetting these risks. Going forward, the industry will need to find new ways to share knowledge and best practices in a bid to enhance its own, its brokers’ and its clients’ cybersecurity measures, keep pace with the changing regulatory landscape, and build risk models that accurately represent cyber risk as it stands today. As Bourne put it, this isn’t just a chance to combat the risks of AI, but to “unlock its potential to make insurance better.”
