Balancing risk and reward: Deploying AI in the fight against financial crime
AI is becoming an essential tool in combating illicit financial activity. However, with great technological power comes great responsibility.
With deepening global connectivity, financial crime has grown in scale and complexity year on year, and is now estimated at more than USD 5.8 trillion annually, or almost 7 per cent of global GDP. Financial institutions (FIs) are on the frontlines of the fight against it, in collaboration with regulators, industry bodies and technology companies.
FIs have invested billions of dollars in technology to help identify risk patterns and intercept financial crimes, and many are now adding artificial intelligence (AI) tools to their arsenal. AI’s ability to analyse large datasets and identify anomalies in transactions, as well as patterns in customer behaviour, makes it a powerful tool for risk management and meeting compliance requirements. However, there are practical and ethical considerations when deploying this technology.
A changing crime landscape
Whilst the digitisation of financial services has made transactions faster, easier and more convenient, it has also enabled financial crime to evolve and grow more sophisticated. The rise of online payment methods, for example, has created new avenues for fraud, money laundering and terrorist financing, amongst other crimes. Hyper-connectivity between markets adds to those risks, as transnational criminals can transfer illicit funds across borders in seconds. Cybercrime, which does not respect physical borders, further complicates investigations and makes it harder to recover funds.
“There are a number of avenues to address this challenge,” says Caroline Ngigi, Global Head, Conduct, Financial Crime and Compliance Advisory, Transaction Banking and Africa, Standard Chartered. “One is action by governments and regulators, who are responding by strengthening both domestic and cross-border laws and striking strategic partnerships to enable information-sharing aimed at fighting financial crime. Then there is the option of using the same emerging technologies, such as AI, to stymie these bad actors.”
The promise (and reality) of AI today
With their ability to process reams of data and identify irregularities, AI algorithms are well suited to screening for illicit transactions. Standard Chartered, for example, integrates AI and machine learning models into Name and Transaction Screening to ensure consistent and timely decision-making. This enhances compliance with regulations and reduces the number of manual interventions needed compared with conventional technologies, which in turn speeds up transaction processing.
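To illustrate the underlying idea, the sketch below shows the simplest form of fuzzy name screening: comparing an incoming name against a watchlist and flagging close matches that an exact string comparison would miss. It uses only Python's standard difflib module; the watchlist entries and the similarity threshold are purely illustrative and bear no relation to any production screening model.

```python
# Minimal sketch of fuzzy name screening against a watchlist, using only the
# Python standard library. Watchlist entries and threshold are illustrative.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading LLC"]  # hypothetical entries

def screen_name(candidate: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to `candidate` meets the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, candidate.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# A misspelt name still matches, which an exact string comparison would miss.
print(screen_name("Ivan Petrof"))  # [('Ivan Petrov', 0.91)]
```

Production systems layer trained models, transliteration handling and contextual data on top of this basic matching step, but the core principle of scoring similarity rather than demanding exact matches is the same.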
Ngigi notes that AI models have also proven effective in areas such as sanctions compliance. The number of global sanctions has increased significantly since the start of the war in Ukraine in 2022, with the World Trade Organization estimating that 12 per cent of global trade is now affected by sanctions. With rules varying across regions and evolving rapidly, many FIs are using AI models to spot potential breaches. AI tools are well suited to this area because governments and law enforcement agencies publish definitive sanctions lists, and most FIs hold vast quantities of historical data that can be used to train the models. That said, the complexities introduced by sector-specific sanctions mean that existing tools must keep evolving, aided by greater human oversight as they learn.
Advanced surveillance and monitoring technologies can help banks stay ahead of criminals by identifying not just individual illicit transactions, but wider patterns of suspicious behaviour. By training models to detect outliers and unusual patterns, FIs can identify new and unique typologies and take a proactive approach to fighting financial crime.
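As a hedged illustration of how such outlier detection works in principle, the sketch below trains an unsupervised isolation forest on simulated transaction features and flags the rare points that deviate from the norm. The features, data and contamination rate are invented for the example and do not reflect any bank's actual model; scikit-learn is assumed to be available.

```python
# Minimal sketch of unsupervised outlier detection on transaction features.
# Features, simulated data, and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated features: [amount, transactions_per_day, distinct_counterparties]
normal = rng.normal(loc=[200, 5, 3], scale=[50, 2, 1], size=(1000, 3))
suspicious = np.array([[9500, 40, 25], [8800, 35, 30]])  # burst-like activity
X = np.vstack([normal, suspicious])

# The model isolates points that differ from the bulk of the data; it flags
# roughly 1 per cent of transactions for analyst review.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(X)  # -1 marks outliers, 1 marks inliers
flagged = X[labels == -1]
print(f"{len(flagged)} transactions flagged for analyst review")
```

Because the model learns what "normal" looks like rather than matching known red flags, it can surface behaviour that no existing rule describes, which is what makes this approach proactive rather than reactive.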
“So, rather than just chasing immediate violations, we can lead in detecting criminal behaviour and working alongside regulators and governments to curb it,” Ngigi says. As well as mitigating risks, AI tools can also help lower the costs of compliance in the long run. “If your AI tools are helping you predict the pattern of criminal behaviour, your human workforce can be deployed to work on more complex risk management matters,” she explains.
Navigating AI adoption: What to consider
Despite AI’s numerous benefits, FIs need to consider several important factors to ensure successful outcomes, starting with data quality. “FIs need very good data lineage to train, test and deploy the models effectively,” Ngigi says. Despite holding vast datasets, some FIs may be less nimble in deploying AI models as they contend with legacy and/or fragmented data architecture.
“If the current data architecture is not designed in a way that is easily readable and transferable, then it is going to be difficult to deploy these tools. Therefore, legacy data architectural constructs may lead to slower adoption of AI,” she explains.
With this in mind, FIs have made modernising legacy infrastructure a priority as they look to harness data faster and better. At the same time, concerns about data structure extend beyond FIs: the laws and regulations used to train AI models also need to be in a structured format that is easily readable and transferable. Where regulations are nebulous and unstructured, deploying AI will be difficult, if not impossible, says Ngigi.
Another important factor is the periodic tuning of AI models to ensure that they remain accurate whilst maintaining explainability. This tuning should also account for new and evolving threats in the industry to keep the models relevant. Continuous testing, feedback and robust challenge of the models are also important. As highlighted by the Wolfsberg Group, FIs should carefully control the technology they rely on and understand the implications, limitations and consequences of its use to avoid ineffective financial crime risk management.
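One common way to operationalise this kind of periodic review, at least in principle, is to monitor whether the distribution of model scores in production has drifted away from the distribution seen at deployment. The sketch below computes the Population Stability Index (PSI), a metric widely used in model risk management for exactly this purpose; the bin count, the assumed score range and the 0.25 rule-of-thumb threshold are illustrative assumptions.

```python
# Minimal sketch of drift monitoring with the Population Stability Index (PSI).
# Model scores are assumed to lie in [0, 1]; bins and threshold are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the production score distribution against the deployment baseline."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins so the log term stays finite.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.default_rng(1).beta(2, 5, 10_000)  # scores at deployment
live = np.random.default_rng(2).beta(3, 4, 10_000)      # scores this month
print(f"PSI = {psi(baseline, live):.3f}")  # rule of thumb: > 0.25 warrants re-tuning
```

A rising PSI does not by itself prove the model is wrong, but it is a cheap, automatable signal that the world the model sees has changed and that re-tuning and human review are due.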
“FIs looking to adopt AI tools also carry the responsibility of ensuring that these are used safely and with significant ongoing testing and monitoring,” Ngigi adds. As an example, biased or discriminatory AI models pose reputational and legal risks, with potential financial consequences for FIs. These problems arise when models trained on historical data reproduce the discrimination embedded in that data. Regulators have warned of this potential for bias, as seen in the US with consumer credit denials made using AI tools.
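A minimal, hedged example of the kind of testing this implies is a disparate-impact check: comparing how often the model flags customers in different groups. The group labels, the toy data and the 80 per cent rule-of-thumb threshold below are illustrative assumptions, not a complete fairness framework.

```python
# Minimal sketch of a disparate-impact check on model alert decisions.
# Group labels, data, and the ~0.8 threshold are illustrative assumptions.
import numpy as np

def flag_rate_ratio(flags: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest group flag rate to the highest (1.0 means parity)."""
    rates = [flags[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

flags = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # model alert decisions
groups = np.array(list("AABBAABBAB"))              # hypothetical group labels
print(f"flag-rate ratio = {flag_rate_ratio(flags, groups):.2f}")
# Ratios well below ~0.8 suggest one group is flagged disproportionately
# and merit investigation before the model's outputs are acted upon.
```

Checks like this are a starting point rather than a conclusion: a skewed ratio may reflect genuine risk differences, but it obliges the FI to investigate before relying on the model.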
“We are custodians of our clients’ data and have a responsibility for how we use it. Where we use client data in any AI tool, ethics must be a priority, and where the data is of a sensitive nature such as personal data, then it carries a higher bar,” she cautions.
To stay ahead of criminals, financial institutions need to build trust and confidence in the very technologies designed to combat financial crime. Developing internal governance protocols that include ethical considerations for deploying AI, as Standard Chartered has done, is key. The bank’s Responsible Artificial Intelligence Standard safeguards customers and partners by ensuring that every AI use case deployed adheres to the pillars of fairness, ethics, transparency, and self-accountability. Standard Chartered also works actively with regulators to shape guidelines for responsible AI use.
“We really wanted to home in on ethics. Not because it’s prescribed, but because it’s the right thing to do by our clients, our shareholders, and our regulators,” Ngigi says.
These challenges will be amplified by the continuous evolution of AI and generative AI, so FIs should also invest in the right expertise. “Upskilling is required to develop the right oversight and robust control frameworks,” she notes. “With human intervention and accountability specific to ethics, we can manage this risk more effectively.”
The promise of AI in revolutionising the fight against financial crime is undeniable. As FIs modernise their data infrastructure and develop sound ethical frameworks, the use of AI is set to grow. FIs are encouraged to work alongside regulators and industry bodies, as Standard Chartered has done, both in adopting these tools and in building robust control frameworks. By presenting a unified front and strategically aligning their efforts, FIs can turn the tide against financial crime.