Artificial intelligence in wealth management: friend or foe?

By capturing, analysing and interpreting data through AI, wealth managers can deliver customised solutions to clients.

September 23, 2024

Navigating the investment market is becoming increasingly challenging for wealth managers and their clients, with the growing number and complexity of products, shifts in investor demographics and macroeconomic events all fuelling pressure to reduce costs and improve margins. The need for agility – anticipating and reacting quickly to change – has therefore become imperative. At the same time, client engagement and demand for transparent, efficient and bespoke solutions are rising.

Leveraging cutting-edge technology to generate optimised insights and analysis has consequently become an ingrained strategy for wealth managers. And at the heart of this is artificial intelligence (AI).

AI is proving an extremely powerful tool in enabling a range of enhancements across most aspects of our daily lives. Indeed, it is not an exaggeration to say that it is ultimately set to change the world as we know it. Its ability to capture, analyse and interpret data means it is positioning wealth managers to deliver customised solutions and a more efficient, transparent client experience.

Yet, as our industry embarks on the AI journey and looks to reap the benefits, implementing robust data privacy measures is a must, both to prevent AI’s data-mining capabilities from being misused by fraudsters and to ensure sensitive client information remains secure and inaccessible to interceptors.

The steadfast rise of AI

AI is progressively being applied to wealth management operating models to meet evolving client needs. In particular, automation, algorithms and predictive analytics are being used to support better decision-making, drive more personalised advisory services, streamline processes and enhance productivity. AI solutions also enable round-the-clock, real-time visibility across multiple channels, and greater automation of investment management, including client profiling, asset allocation, portfolio analysis and compliance. Furthermore, by producing detailed, accurate market insights almost instantaneously – something that would typically take a team of highly specialised experts days to create – AI delivers significant efficiency gains, freeing wealth managers to devote more time to higher-value activities such as client service and decision-making.
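
To make the idea of automated client profiling and asset allocation more concrete, the sketch below pairs a hypothetical client profile with a rule-based model allocation. It is purely illustrative: the ClientProfile fields, model weights and adjustment rule are assumptions for the example, not a description of any particular firm’s system, and any output would still sit behind an adviser’s judgement.

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    age: int
    risk_tolerance: str   # "conservative", "balanced" or "growth"
    horizon_years: int

# Illustrative model allocations keyed by risk tolerance (percentages).
MODEL_ALLOCATIONS = {
    "conservative": {"equities": 30, "bonds": 55, "cash": 15},
    "balanced":     {"equities": 55, "bonds": 35, "cash": 10},
    "growth":       {"equities": 75, "bonds": 20, "cash": 5},
}

def suggest_allocation(profile: ClientProfile) -> dict:
    """Return a draft asset allocation for an adviser to review."""
    allocation = dict(MODEL_ALLOCATIONS[profile.risk_tolerance])
    # Simple rule: shorter horizons shift up to 10 points from equities to cash.
    if profile.horizon_years < 5:
        shift = min(10, allocation["equities"])
        allocation["equities"] -= shift
        allocation["cash"] += shift
    return allocation

if __name__ == "__main__":
    client = ClientProfile(age=52, risk_tolerance="balanced", horizon_years=4)
    print(suggest_allocation(client))  # {'equities': 45, 'bonds': 35, 'cash': 20}
```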

Client engagement is another area that can be considerably enhanced. AI can facilitate self-service modules and give users improved access to algorithms and trading tools. Moreover, the deep insights that AI generates make for enhanced, interactive, value-added client meetings with more personalised recommendations.

Generative AI (GenAI) is another form of AI that wealth managers are starting to explore. GenAI is able to recognise patterns not just in numerical data but in language-based information, and then use those patterns to create new, original output based on what it has seen and ‘learned’. The client experience is an area that stands to benefit particularly from GenAI. For example, it is estimated that productivity in sales and client service could increase by 30–40 per cent as advisers are able to dedicate more time to client service instead of administrative tasks.
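
As one hedged illustration of how GenAI might relieve administrative work, the sketch below asks a language model to draft a client update from a brief portfolio summary. It assumes the OpenAI Python SDK and an API key are available; the model name, prompt wording and portfolio figures are illustrative only, and any draft would be reviewed by the adviser before it reaches a client.

```python
# Minimal sketch: asking a GenAI model to draft a personalised client note.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; model name, prompt and figures are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

portfolio_summary = (
    "Client: balanced risk profile. YTD return: +4.2%. "
    "Equities 55%, bonds 35%, cash 10%. Next review: Q3."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You draft concise, plain-English client updates for a wealth adviser to review before sending."},
        {"role": "user",
         "content": f"Draft a short client update based on: {portfolio_summary}"},
    ],
)

draft = response.choices[0].message.content
print(draft)  # the adviser reviews and edits before anything is sent
```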

Data security

The extensive benefits of AI within wealth management are undisputed. But as it becomes increasingly sophisticated and powerful, we cannot approach it without caution. When used maliciously, AI can pose a real risk to privacy and, in turn, lead to financial losses, identity theft and other forms of fraud.

With reams of data now stored online across a host of different sources, if that information isn’t protected effectively, there is the potential for ‘weak links’ to result in security breaches – and this is exacerbated by AI, which can easily collate disparate pieces of information to create structured insights into a subject, business or individual. Imitation is also a growing threat.

AI-generated ‘voices’ are being used to impersonate individuals on telephone calls, for example, and are so sophisticated that they have been illegally used to authorise bank account transfers of huge sums. The ability of AI to create realistic voices and emails – and even potentially avatars on video conferencing calls – may seem like science fiction, but it is a real prospect that high-net-worth individuals (HNWIs) and wealth managers need to be aware of and mitigate. Notably, with voice and email manipulation on the rise and fuelling concerns about how genuine shared information and interactions really are, traditional face-to-face meetings are returning to prominence as a means of mitigating the risk of AI-generated fraudulent communication.

As wealth managers increasingly utilise AI for the benefit of clients, ensuring that platforms and systems are secure at all times is paramount to protecting their interests.

Ensuring security

At the very heart of wealth management is trust. HNWIs entrust their wealth managers with their most sensitive financial data, including personal identification information, financial history, investment preferences and strategic plans. That trust extends to compliance with the specific regulatory requirements and local laws of the various jurisdictions in which they operate.

With AI regulation ramping up and the focus on data protection intensifying, now is the time for the industry to ensure it is positioned with the tools needed to effectively manage risk, and is fully prepared for new rules on the responsible use of AI that focus on areas such as accountability and ‘explainability’.

Wealth managers will need to be able to demonstrate, for instance, how they collect and process client data and make decisions, even though GenAI models are highly complex and can be difficult to interpret. It is also important to factor in ethical considerations: the fair and unbiased application of AI algorithms, ensuring that decisions made by AI systems do not discriminate against certain client groups.

Sub-standard systems, or providers without adequate protocols in place, risk breaking clients’ trust and driving them to look elsewhere – not to mention incurring reputational damage and hefty fines as a result of compliance breaches. Any AI system should therefore be designed with privacy considerations from the outset, including cutting-edge encryption techniques and access controls.
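
As a hedged illustration of what ‘privacy from the outset’ can mean in practice, the sketch below encrypts a client record before it is stored and gates decryption behind a simple role check. It assumes the Python cryptography package; the roles, record fields and key handling are assumptions for the example, and a real deployment would use a managed key service and far richer access controls.

```python
# Minimal sketch of privacy by design: encrypt sensitive client data at rest
# and gate access by role. Requires the 'cryptography' package; roles and
# record fields are illustrative assumptions, not a production design.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key management service, never hard-coded
cipher = Fernet(key)

def store_client_record(record: dict) -> bytes:
    """Serialise and encrypt a client record before it is persisted."""
    return cipher.encrypt(json.dumps(record).encode("utf-8"))

def read_client_record(token: bytes, role: str) -> dict:
    """Decrypt a record only for roles on a simple allow-list."""
    if role not in {"adviser", "compliance"}:
        raise PermissionError(f"role '{role}' may not access client records")
    return json.loads(cipher.decrypt(token).decode("utf-8"))

encrypted = store_client_record({"name": "A. Client", "portfolio_value": 2_500_000})
print(read_client_record(encrypted, role="adviser"))
```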

Regular security audits and compliance checks should also be conducted to identify and mitigate potential privacy risks. Wealth managers should, for example, undertake routine penetration testing on their software, employing external, independent parties to attempt to professionally hack their systems in order to reinforce their security, support the identification of potential threats and provide actionable recommendations to mitigate any risks.

By regularly challenging and testing their security in this way, wealth managers can give clients confidence that strong, up-to-date controls and procedures are in place to withstand fraud attempts and keep their data secure.

Teamwork makes the AI dream work

The adoption of new technologies is not in itself what defines a service provider’s capabilities; it is the way in which they are applied to elevate client experiences, improve decision-making and drive operational efficiency.

Is AI a friend or foe? At Standard Chartered, we regard it as an opportunity to complement, scale and support the attributes and services that we provide, rather than something that will replace them. We believe it is the marrying of this technology with human decision-making and face-to-face advisory that is key to optimising wealth management. After all, the industry relies on the ability of wealth managers to forge a deep understanding of specific client needs, and client interactions require empathy and good judgement – something that cannot be replicated by technology, now or in the future.

Our strategy centres on delivering a personal, instant and seamless client experience by maintaining the right balance between human-driven interactions and technology. We are focusing on select areas where AI can drive particular value for our clients, such as customer chat and market insights. Importantly, no system is rolled out without undergoing the most rigorous checks to confirm its security.

As AI continues to evolve, HNWIs need a wealth manager that remains vigilant and proactive in addressing data privacy challenges. This includes staying abreast of emerging regulatory requirements, investing in advanced security technologies and building a culture of privacy awareness among employees. Doing so will assure clients that their data remains secure while they enjoy the full scope of the revolutionary benefits that AI can bring.

Wealth managers who successfully integrate AI into their operations while maintaining a strong focus on data security and the irreplaceable value of human empathy are likely to find that AI is more of a friend than a foe.

This article was created in partnership with Spear’s and originally published on Spear’s.