The Financial Markets Authority (FMA) – Te Mana Tātai Hokohoko – has released new research on the adoption of artificial intelligence (AI) across New Zealand’s financial services industry.
The study, part of the FMA’s occasional paper series, surveyed firms in insurance, asset management, banking, and financial advice, aiming to assess both the current use of AI and the industry’s plans for future implementations.
FMA chief economist Stuart Johnson said the research explored both the potential benefits and the risks of AI integration, to understand how the technology is being used in the financial sector today and how it is likely to be applied in future.
“We sought to understand both the benefits and the risks to inform more oversight,” he said.
According to Johnson, while AI is seen as a transformative tool in financial services, it also introduces new challenges, particularly in terms of governance.
“Our findings emphasise the need for a balanced approach to harness AI’s benefits while addressing governance and risk concerns,” he said.
The report identified data quality, technology selection, and proper documentation as critical areas for attention and essential steps in managing AI-related risks.
These aspects are considered key to the ethical and secure use of AI in the financial services sector.
Although the FMA takes a neutral stance on technology, Johnson emphasised the importance of responsible innovation.
“We believe that New Zealanders should have access to the same technological advancements as those in other countries,” he said, adding that AI integration must be done with a focus on managing risks appropriately.
To foster ongoing discussions, the FMA will host a roundtable on Oct. 1, 2024, with participants from the study to further examine the use of AI and generative AI (GenAI) in New Zealand’s financial services industry and discuss how firms are managing emerging risks.
As AI continues to be adopted across the legitimate financial sector, a recent report from cybersecurity firm Trend Micro highlighted the increasing use of GenAI in cybercrime.
The report, updated on July 30, warns that the use of GenAI for criminal activities is accelerating.
Researchers David Sancho and Vincenzo Ciancaglini pointed to a rise in the availability of large language models (LLMs) designed for malicious purposes. These models are being promoted on encrypted messaging platforms such as Telegram, offering users unrestricted responses to harmful queries.
Unlike commercial AI systems like ChatGPT and Google’s Gemini, which are programmed to block unethical requests, these criminal LLMs are specifically designed to support illegal activities.
The report also highlighted a resurgence of earlier criminal AI models such as WormGPT and DarkBERT, which have returned to underground markets in updated versions. These LLMs, once believed to have been discontinued, are now being offered with new features, including voice capabilities.
In addition to the resurgence of older models, new LLMs like DarkGemini and TorGPT have emerged. Although their capabilities mirror those of other criminal AI tools, their ability to handle image processing adds another layer of potential misuse in cybercrime.
The researchers further noted an increase in deepfake technology being used in criminal activity, warning that as this technology becomes more accessible, it could be increasingly used to target individuals.