How can financial institutions and the banking sector brace themselves for the escalating risks associated with generative AI, particularly as it pertains to deepfakes and sophisticated fraud schemes?
As criminals harness increasingly advanced AI technologies to deceive and defraud, banks are under pressure to adapt and fortify their defences. Deloitte's latest insights shed light on the potential surge in fraud losses, prompting a critical examination of the measures needed to safeguard financial systems in this rapidly evolving landscape.
In January, an employee at a Hong Kong-based firm transferred $25 million to fraudsters after receiving instructions from what appeared to be her chief financial officer during a video call with other colleagues. However, the individuals on the call were not who they seemed. Fraudsters had used a deepfake to replicate their likenesses, deceiving the employee into making the transfer.
Incidents like this are expected to increase as bad actors employ more sophisticated and affordable generative AI technologies to defraud banks and their customers. Deloitte’s Centre for Financial Services predicts that generative AI could drive fraud losses in the United States to $40 billion by 2027, up from $12.3 billion in 2023, representing a compound annual growth rate of 32%.
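As a quick sanity check on that projection, the implied compound annual growth rate can be computed from the two endpoints. This is a minimal sketch using only the figures cited above; the small gap from the quoted 32% likely reflects rounding in the published endpoints.

```python
# Figures cited above: $12.3B in 2023 growing to $40B in 2027 (4 years).
start, end, years = 12.3, 40.0, 4

# Compound annual growth rate: (end / start) ^ (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~34%, close to the cited 32%
```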
Generative AI has the potential to significantly expand the scope and nature of fraud against financial institutions and their clients, limited only by the ingenuity of criminals. The rapid pace of innovation will challenge banks' efforts to outpace fraudsters. Generative AI-enabled deepfakes use self-learning systems that continually improve their ability to evade computer-based detection.
Deloitte notes that new generative AI tools are making deepfake videos, synthetic voices, and counterfeit documents more accessible and affordable for criminals. The dark web hosts a cottage industry selling scamming software priced from $20 to thousands of dollars. This democratisation of malicious software renders many current anti-fraud tools less effective.
Financial services firms are increasingly concerned about generative AI fraud targeting client accounts. One report highlighted a 700% increase in deepfake incidents in fintech during 2023, and for audio deepfakes in particular, the technology industry has yet to develop reliable detection tools.
Certain types of fraud can be made more effective by generative AI. Business email compromise, one of the most prevalent forms of fraud, can result in significant financial losses. According to the FBI's Internet Crime Complaint Center, there were 21,832 business email compromise complaints in 2022, with losses of approximately $2.7 billion.
With generative AI, criminals can scale these attacks, targeting multiple victims simultaneously with the same or fewer resources. Deloitte’s Centre for Financial Services estimates that generative AI-driven email fraud losses could reach $11.5 billion by 2027 under an aggressive adoption scenario.
Banks have long been at the forefront of using innovative technologies to combat fraud. However, a US Treasury report indicates that existing risk management frameworks may not be sufficient to address emerging AI technologies. While traditional fraud systems relied on business rules and decision trees, modern financial institutions are deploying AI and machine learning tools to detect, alert, and respond to threats. Some banks are using AI to automate fraud diagnosis processes and route investigations to the appropriate teams. For example, JPMorgan employs large language models to detect signs of email compromise fraud, and Mastercard's Decision Intelligence tool analyses a trillion data points to predict the legitimacy of transactions.
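To make the contrast between static business rules and model-based detection concrete, here is a minimal, hypothetical sketch; it is not any bank's actual system, and the threshold, transaction history, and z-score cutoff are all illustrative assumptions. A fixed rule only catches amounts above a hard threshold, while even a simple statistical model can flag a transfer that is unusual for a particular customer.

```python
from statistics import mean, stdev

# Illustrative transaction history for one customer (amounts in USD).
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.9, 58.1]

def rule_based_flag(amount: float, threshold: float = 10_000.0) -> bool:
    """Traditional approach: a static business rule with a fixed threshold."""
    return amount > threshold

def anomaly_flag(amount: float, past: list[float], z_cutoff: float = 3.0) -> bool:
    """Model-based approach: flag amounts far outside the customer's norm."""
    mu, sigma = mean(past), stdev(past)
    return abs(amount - mu) / sigma > z_cutoff

# A $2,500 transfer slips past the static rule but is highly
# anomalous relative to this customer's historical behaviour.
amount = 2_500.0
print(rule_based_flag(amount))        # False: below the fixed threshold
print(anomaly_flag(amount, history))  # True: far outside past behaviour
```

Production systems are vastly more sophisticated, but the design trade-off is the same one the paragraph above describes: rules are transparent but rigid, while learned models adapt to each customer's behaviour.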
To stay ahead of fraudsters, Deloitte notes, banks must combat generative AI-enabled fraud by integrating modern technology with human intuition to anticipate and thwart attacks.
The firm explains that there is no single solution; anti-fraud teams must continuously enhance their self-learning capabilities to keep pace with fraudsters. Future-proofing banks against fraud will require redesigning strategies, governance, and resources.
The pace of technological advancement means that banks will not combat fraud alone; they will increasingly work with third parties developing anti-fraud tools. Because a threat to one company can endanger others, bank leaders should look to collaborate within and beyond the banking industry to counter generative AI-enabled fraud.
This collaboration will involve working with knowledgeable and trustworthy third-party technology providers, with each party's responsibilities clearly defined to address liability concerns when fraud occurs.
Customers can also play a role in preventing fraud losses, although determining responsibility for fraud losses between customers and financial institutions may test relationships. Banks have an opportunity to educate consumers about potential risks and the bank’s management strategies. Frequent communication, such as push notifications on banking apps, can warn customers of possible threats.
Alongside the banking industry, regulators are focusing on the opportunities and threats posed by generative AI. Banks should actively participate in developing new industry standards and incorporate compliance early in technology development, maintaining records of their processes and systems for regulatory purposes.