Asian finance weighs AI risks

IFR Asia 1342 - 06 Jul 2024 - 12 Jul 2024
Asia
Daniel Stanton

Financial institutions and regulators in Asia Pacific are figuring out how to deal with the risks from generative artificial intelligence technology, as more banks and money managers integrate it into their processes.

A survey of 157 hedge fund managers conducted in the fourth quarter of last year and published by the Alternative Investment Management Association in February found that most hedge funds are using generative AI tools, with 86% allowing them to be used to support their employees’ work.

"There are lots of use cases across the whole value chain, from portfolio construction to risk management and compliance,” said Kher Sheng Lee, co-head of Asia Pacific and deputy global head of government affairs at AIMA. That includes generating personalised investor interactions in a fraction of the time.

“It’s mind-boggling how much of our industry still relies on pen and paper or Excel,” said Lee. “Every investor has their own specific requirements and requests for reports. If you’ve got AI plugged into your data, you could deliver bespoke investor insights in a fraction of the time, freeing you to focus on other high-value activities.”

Banks, too, see potential for generative AI to produce documents for anti-money laundering checks, as well as prospectuses for equity or debt offerings.

“AI will enable analysts to focus on deeper tasks, as it takes care of the routine things they need to run,” said the head of Asia investment banking at a US bank.

AIMA’s survey found most firms used the open-access versions of generative AI tools, with ChatGPT the most popular, but around 35% use a company-specific version and a further 10% are moving to one.

Fund managers in Hong Kong are at risk of being left behind, though, when usage restrictions kick in this week. Users in Greater China, along with those in some other jurisdictions such as Russia and Saudi Arabia, will no longer be able to access ChatGPT from July 9, the app’s developer OpenAI said last month. Currently, users in Hong Kong can use workarounds like VPNs or phone numbers from other countries to access ChatGPT, but OpenAI said it would use “additional measures” to block API traffic from unsupported countries.

That means institutions based in Hong Kong will not be able to do any research and development work based on ChatGPT, though they will be able to use alternative generative AI platforms such as Google’s Gemini. Technology companies like Microsoft are also building such capabilities into their products.

Larger firms may build their own corporate AI tool on top of the publicly available version for added security and to tailor it to their needs, while smaller firms will probably use the standard versions.

“We have different levels of access to ChatGPT, from 1 to 5,” said the Singapore head of a global asset manager. “I only use level 2. And our IT staff have set their own security measures around it.”

While generative AI promises efficiency gains, it also poses risks. A global survey conducted by consulting firm McKinsey across multiple industries from February to March this year found 44% of respondents had suffered at least one negative consequence at their organisations due to the use of generative AI. Their biggest concerns about the technology were inaccuracy of results and potential intellectual property violations.

In AIMA’s survey, 27% of respondents raised ethical concerns about using generative AI. “Using AI in investment strategies raises ethical questions, particularly around market bias and the potential for market manipulation,” wrote AIMA. “Moreover, Gen AI has increased the risks of deep fakes, phishing and fraud, and hedge fund managers should protect against these risks, or client relationships could be affected.”

Regulatory questions

Regulators in Asia are still figuring out how to balance the risks of generative AI against its potential to develop the industry. The financial industry began adopting advanced analytical models in the days of Basel II and moved into machine learning seven or eight years ago, so regulators and institutions already incorporate model risk in stress testing. Generative AI, though, presents some additional challenges.

“We need a balanced regulatory approach that fosters innovation while ensuring investor protection,” said AIMA’s Lee. “Overly prescriptive rules could stifle adoption by smaller firms, hindering the industry’s growth.”

Hong Kong’s Securities and Futures Commission has been in touch with asset managers to discuss setting technical standards for the use of generative AI – something that is difficult to do while the technology is still developing so quickly.

The Monetary Authority of Singapore in January published a white paper detailing a risk framework for the generative AI sector, which warned that financial institutions need to be aware of potential biases in off-the-shelf AI systems.

As well as looking at risk mitigation, the MAS is considering how the technology will affect current job roles and how Singapore can prepare its workforce for the new kinds of roles needed to support generative AI. Given the restrictions on ChatGPT in Hong Kong, Singapore looks likely to become a regional hub for AI development in the financial sector.

Kim Joo-hyun, chairman of South Korea’s Financial Services Commission, last year encouraged financial institutions to adopt AI to make them more competitive globally, but warned of the risks of AI failures or “digital herding”, where market participants are led by the same input data to behave in the same way.

Elsewhere in the region, the Australian government published an interim report on AI in January, which noted that companies and individuals using the technology are already subject to laws on intellectual property and consumer protection, for example, but acknowledged that existing laws do not prevent AI from causing harm.

Therese McCarthy Hockey, a member of the Australian Prudential Regulation Authority, told the Australian Finance Industry Association’s Risk Summit in May that regulated entities should have strong board oversight and risk management in place before they begin experimenting with advanced AI. “Entities without such measures in place should only proceed with a high level of caution – or potentially not at all,” she said.

Joe Longo, chair of the Australian Securities and Investments Commission, warned in a speech in January of the risks that generative AI models could be manipulated or poisoned with incorrect input data to produce “hallucinations” – nonsensical answers. He said ASIC is currently pursuing an action against an insurance provider that did not pass on advertised discounts to customers due to AI-related issues.

Some industry figures have proposed that such tools should follow an "AI constitution" dictating what they can and cannot do, while the European Union's Artificial Intelligence Act requires companies to complete an AI risk assessment before implementation.

Steven Claxton, head of treasury and risk, Asia Pacific, at financial technology provider FIS Capital Markets, said that regulators might allow financial institutions to use “black box” AI models for processes like spotting suspicious transactions, but would probably demand more transparency for functions that could affect financial markets or create systemic risk.

Since regulators and institutions already have years of experience dealing with financial models, the true risks of generative AI might emerge elsewhere, he said.

"The likelihood is that reputational and strategic losses are going to come from non-traditional parts of the institution like marketing, advertising and human resources, especially if HR uses AI to categorise candidates and racial or gender bias comes through," he said.

Regulators may themselves adopt advanced AI to do their jobs more effectively.

“The real power of predictive AI is going to be in supervisory oversight to give central banks and regulators early warning signals about the state and health of banks’ business models,” said Claxton.

A spokesperson for ASIC said that while the regulator is closely monitoring how AI is affecting the safety and integrity of the financial ecosystem, "ASIC is also exploring the potential uses of artificial intelligence and other technologies to remain a digitally enabled, data-informed regulator".

What slows the adoption of generative AI may be something more mundane. Institutions wanting to adopt the technology need to account for how much additional computing power it requires. A Google search uses 0.3 watt-hours of electricity, while a ChatGPT request consumes 2.9 watt-hours – nearly ten times as much – according to a report published by the International Energy Agency in January.
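As a rough back-of-the-envelope illustration, the short Python sketch below scales the IEA’s per-request figures to an assumed workload. The one-million-requests-a-day volume is purely hypothetical, chosen only to show the order of magnitude involved.

    # Per-request energy figures from the IEA report cited above.
    GOOGLE_SEARCH_WH = 0.3    # watt-hours per Google search
    CHATGPT_REQUEST_WH = 2.9  # watt-hours per ChatGPT request

    # Hypothetical workload, for illustration only.
    REQUESTS_PER_DAY = 1_000_000

    ratio = CHATGPT_REQUEST_WH / GOOGLE_SEARCH_WH
    daily_kwh = CHATGPT_REQUEST_WH * REQUESTS_PER_DAY / 1_000
    annual_mwh = daily_kwh * 365 / 1_000

    print(f"A generative AI request draws {ratio:.1f}x the energy of a search.")
    print(f"Daily load at the assumed volume: {daily_kwh:,.0f} kWh")
    print(f"Annual load at the assumed volume: {annual_mwh:,.0f} MWh")

At that assumed volume, generative AI queries alone would consume roughly a gigawatt-hour a year – the kind of figure institutions would need to weigh against their sustainability commitments.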

“There’s the question of how you roll it out," said Claxton. "Institutions need to leverage cloud computing and data centres to run industrial scale AI, which may come at the expense of their ESG targets, given the power required to cool the servers.”