The following article was written in association with Cowbell.
AI is nothing new to insurance, particularly in the cyber world. However, the rapid evolution of cybercrime and ransomware attacks means organizations must continually level up, not just to stay competitive but to survive.
Cowbell’s Roundup Report 2024 found that larger businesses (those with annual revenues exceeding $50 million) face significantly greater cyber risk, experiencing 2.5 times more cyber-related incidents than their smaller counterparts. Speaking to Insurance Business, Rajeev Gupta, co-founder and chief product officer at Cowbell, described AI as a double-edged sword: while it presents challenges, it is also a critical tool that organizations need to embrace.
“Using AI to detect fraudulent claims, for example, is no longer a nice-to-have; it’s a necessity, because the bad guys are already using it,” Gupta said. “Claims are coming in with AI-crafted documents. Someone can easily create what looks like a legitimate ransomware attack, but all the artifacts are AI-generated. If a human is evaluating that claim, they might believe it’s real. That’s where we need AI. And if insurance companies aren’t using it, they’ll end up on the losing side of the battle.”
It’s a case of AI versus AI. Threat actors are rapidly upskilling, leveraging increasingly advanced tools to bypass fraud detection. Insurers need to respond with AI of their own, but Gupta emphasized the importance of maintaining ethical boundaries.
“There are a lot of limitations on what AI can and cannot do,” he noted. “The more complex the risk, the harder it is to rely completely on AI. You still need a human in the loop, especially for complex risk understanding, underwriting, and claims.”
Gupta likened this human-AI partnership to a Venn diagram, where the two elements overlap and enhance one another. AI can uncover patterns and insights that might be invisible to humans, thanks to its deep data-processing capabilities.
“It can connect the dots in a way that only someone with 40 years of experience might,” he said. “AI can reach that level of expertise in just a few months. But there are things only humans can do. That’s why a hybrid approach is ideal. You get the best of both worlds and deliver the best possible service to your customers.”
The data backs up this view. Research from Duck Creek Technologies shows that 44% of consumers now prefer to interact with a human after a policy is in place, up from 35% in 2022.
“AI will get better over time,” Gupta added. “If 50% of underwriting is currently done by the system, that could rise to 60 or 70%. But the human will still be needed for complex and strategic decision-making. Nobody knows what the future holds, but I believe humans will always be needed for oversight, especially when it comes to bias, outliers, exceptions, and upholding ethical standards.”
Looking ahead to AI’s role in cybersecurity, Gupta stressed the importance of ongoing accuracy and fairness in assessments. Static models simply won’t cut it.
“Continuous assessment is key,” he said. “People talk about bias in AI — we may start with limited data, but if we continuously assess and bring diverse signals into the model, we can reduce that bias. That’s what keeps the model relevant. Because the threat actors are already out there using this tech, we need updated models that can detect new types of threats. That’s where continuous threat intelligence and business assessments are essential.”