Deepfakes are fake videos or audio recordings that are made to look and sound authentic with the aid of artificial intelligence (AI) technology. Deepfake technology is readily available and is rapidly improving. Almost anyone can create a deepfake to promote a chosen agenda, making it a dangerous tool in malicious hands.
To this point, deepfakes have been most prevalent in the realm of amateur hobbyists. One quick YouTube search will direct you towards countless spoof videos of politicians being made to say funny things. While that’s a light-hearted and common example, deepfake technology could just as easily be used to misinform the public about an event or manipulate shareholders in a corporate context.
“When hackers use AI and automation to create fake videos or recordings of people, and it looks or sounds like people are saying things that they never said – to me, that’s really frightening,” said John Farley, managing director and cyber practice group leader at Gallagher. “They could have a world leader appear to say things that could potentially start a war. They could have a CEO appear to say things about earnings that could drive a stock up or down. It’s pretty wild when you think about the kind of harm that could cause and how a hacker could financially gain from some of that.”
If a cyber criminal used deepfake technology to manipulate a corporate earnings video posted publicly on YouTube, and that spoofed video then triggered a stock crash for the company – how would the cyber insurance market respond?
“A situation like that might not be covered because many cyber insurance policies require certain triggers before coverage kicks in,” Farley told Insurance Business. “A policy might require a network penetration or a cyberattack before it provides coverage, but, in this case, all that’s happened is a manipulation of an existing video that’s already out in the public. It’s not like the client was attacked, so the cyber insurance policy might not cover that harm or that damage.”
When it comes to deepfake videos, it’s almost impossible to take complete preventative action. What companies can do is learn about the risk and try to mitigate any damage as quickly as possible.
Farley explained: “When it does happen, people need to recognise it immediately and take that video offline as quickly as possible. I’m looking at ways to get ready for this threat and I’ve been building relationships with vendors who focus on that mitigation space. As this threat evolves, it’s crucial for all good cyber insurance brokers to think about new ways clients can be covered for the risk.”