Generative artificial intelligence tools are emerging as "versatile assistants" in financial technology, with ever-improving capabilities. But industry pundits have warned enterprises of the perils of using chatbots, and of overrelying on them in their operations.
The idea of applying artificial intelligence to front-office services at WeLab - a Hong Kong-based digital lender - emerged several years ago. As the Lunar New Year approached, the company was going through a rough patch. Its hotline was ringing off the hook, inundated with customers' inquiries, even as employees prepared to return to their hometowns for the most important festival on the Chinese calendar.
The conundrum was: Who would volunteer to stay behind to keep the company's services going? The idea of using AI immediately struck them. "Why not design an AI-powered chatbot that could handle most of our customers' inquiries, and allow everyone to go home?" recalls Simon Loong Pui-chi, founder and group CEO of the financial technology firm.
The team sprang into action, putting all their skills to the test in developing a chatbot that could deal with up to 90 percent of clients' calls. The AI tool turned out to be a hit. It was so successful that everyone could go home for the Lunar New Year holidays without having to worry about work.
Loong says his team is now exploring ways to integrate large language models, like OpenAI's ChatGPT, into WeLab's chatbot technology, and has already begun testing them. With the rise of generative AI, the company hopes to keep improving the chatbot's capabilities by providing more natural language responses to clients.
New technologies have sparked excitement among enterprises. A report by McKinsey and Company - one of the world's leading strategy consulting firms - estimates that generative AI could deliver value equivalent to between 2.8 and 4.7 percent of the banking sector's annual revenues, translating into extra value of $200 billion to $340 billion a year. One area where AI could inject significant value is customer operations.
WeLab's AI-powered chatbot has handled more than 80 percent of some 70 million customer dialogues in the Chinese mainland market. The chatbot is also responsible for providing investment suggestions in wealth management, including financial planning, investment portfolio recommendations and fund transactions. This has resulted in drastic cuts in manual labor and increased efficiency, says Loong.
Jasmine Lee - managing partner at EY Hong Kong and Macao - says generative AI systems have the advantage of using more advanced algorithms and accessing much larger databases, enabling them to produce more natural-sounding responses and save time for users.
However, it's important to recognize that ChatGPT doesn't guarantee the accuracy and reliability of its generated content, and the responses may be inaccurate, overly generic or inappropriate for the context, explains Lee.
Earlier this year, tests by economists at the Federal Reserve Bank of Richmond suggested that ChatGPT should be used only under human supervision. Other experts said that while the generative tool can correctly answer 87 percent of questions in a "standardized test of economics knowledge", "it's not flawless, and may still misclassify sentences or fail to grasp nuances that a human rater with experience in the field would".
Risk management
Several financial AI publications have hailed ChatGPT as a "versatile assistant" for various tasks, including stock selection and economics pedagogy. But they warned against the risk of relying exclusively on AI. Industry pundits have highlighted the importance of keeping human decision-making alongside AI platforms, emphasizing that AI-human collaboration reduces the associated risks.
"Overreliance on AI systems can lead to automation bias, where users place too much trust in the system and ignore other relevant information sources or their own judgment. This can result in errors or oversights," warns Jan Ondrus, associate professor of information systems at ESSEC Business School, Asia-Pacific, in Singapore.
Implementing AI in financial services also raises concerns about legal accountability, he says. "Chatbots are typically operated by multiple parties, including developers, platform providers and users, making it difficult to determine who is accountable for errors and consequences."
Sophiya Chiang, founder of Web3-enabled investment platform Deploy, and a member of the FinTech Association of Hong Kong, suggests that financial institutions using generative AI systems should ensure that human experts review and validate outputs for critical decisions or complex scenarios that require human judgment.
The Hong Kong Special Administrative Region has no specific legislation on AI. Only guidelines issued by government entities, such as the Office of the Privacy Commissioner for Personal Data, are available. The guidelines cover seven principles - accountability; human oversight; transparency and interpretability; data privacy; fairness; beneficial AI; and reliability, robustness and security. They are not mandatory, and their effectiveness depends on the willingness and collaboration of each participant.
Han Sirui, a research assistant professor specializing in finance and regulations at the Policy Research Centre for Innovation and Technology, the Hong Kong Polytechnic University, believes that, given Hong Kong's strategic positioning as a regional technology hub, implementing AI laws in the city is imperative.
"To achieve sustainable technology growth in the future, the government must consider the potential harm that AI tools may generate and take measures to counteract any negative effects," she warns. In her view, the absence of specificAI regulations and compliance requirements regarding risks could hinder financial institutions from leveraging AI to evolve their businesses.
Thus, the SAR government must establish "a clear AI regulatory framework that addresses critical issues, such as data protection, privacy, transparency, accountability and fairness in the financial industry", she says. Encouraging the adoption of ethical principles through guidelines, and incentivizing financial institutions to follow best practices in AI development and deployment, are also viable options.
"Financial institutions must develop an AI strategy that enables them to leverage it and grow their business in the age of automation," urges Larry Cao, senior director of research at the US-based CFA Institute. "Governments and society, as a whole, need to evaluate AI's social impact and reach a consensus on relevant solutions," he says.
Adaptable, proactive policies
Ondrus at ESSEC Business School calls for flexible government policies to manage risks and ensure that generative AI systems benefit each stakeholder. "To keep track of the latest evolution, governments should encourage developers to be transparent about what their AI systems do and how they function. Besides, appointing an independent auditing body could help enforce necessary rules and precautions," he says.
According to the 2023 Global AI Index published by the British news website Tortoise Media, Hong Kong ranked 32nd out of 62 economies. However, its subrankings in government strategy and talent fell below 50th place, lagging far behind its standings in categories such as infrastructure and commerce. The index evaluates the level of AI investment, innovation and implementation in each economy.
To stay abreast of the latest advancements in AI legislation, Duncan Chiu, a Hong Kong lawmaker representing the technology and innovation functional constituency, has urged the government to take a more proactive approach to policymaking on innovation, rather than solely relying on, following and learning from legislation in other economies, such as the United States and member states of the European Union.
The rapid advancement of generative AI tools has captured the attention of regulators worldwide. China, the US and the United Kingdom are discussing and drafting AI rules, while the European Union has taken the lead in setting global AI benchmarks. It is considering introducing the world's first comprehensive legislation governing AI - the EU AI Act - as soon as this year, with companies found to be violating the rules facing fines.
Meanwhile, China has drafted AI laws, which will be ready for review by the Standing Committee of the National People's Congress - the country's top legislature - this year, according to a document issued by the State Council last month.
The US Federal Reserve recently began discussions with banks to address the potential risks associated with generative AI. Fed Governor Christopher Waller noted that while AI has improved the efficiency of customer services and helped banks monitor fraud, it has also created novel risks, such as the difficulty of detecting issues or biases in large datasets.
Waller cited the "black box" problem - the inability to discern exactly what machines are doing when they are teaching themselves novel skills. This issue "may lead to a decrease in the ability of developers, boardroom executives and regulators to anticipate (financial) model vulnerabilities", according to a 2020 paper entitled "Deep Learning and Financial Stability", co-authored by Lily Bailey and Gary Gensler, who now chairs the US Securities and Exchange Commission.
Paul Sommerin, partner and APAC head of digital and technology at Capco, says that from a market perspective, global regulations will be key to mitigating risks associated with AI's implementation. Another important factor is customer education, as AI requires large amounts of data to effectively learn and provide precise results.
"Proper education and awareness are necessary to ensure data privacy and protection, while leveraging the full potential of artificial intelligence," he says.