Over the last couple of decades, anti-money laundering (AML) has taken centre stage in the banking world. Nowadays, AML drives strategic planning and organisational structuring. AML concerns keep many a manager up long into the night: the risks are huge, the criminals ever more enterprising, and the penalties for infractions potentially devastating. While the prevention of money laundering (and its close relative, terrorist financing) is paramount, the weight and risk faced by financial institutions can feel onerous. Luckily, the banking landscape is changing rapidly, with automation and AI making the burden significantly lighter to carry.
Banks and financial institutions face a two-pronged problem. On the one hand, the volume of digital payments is growing exponentially. Much of the world’s trade is now conducted through purely digital conduits, with over 876 billion non-cash transactions expected to take place in 2021, according to the World Payments Report, published by Capgemini and BNP Paribas. Similarly, PayPal says that by the end of 2019, 2.1 billion customers will have used a digital wallet, a growth of 30% since 2017.
Not only is the volume of digital payments and users growing, so is the speed of transactions. In 2018, the European Central Bank (ECB) launched its Target Instant Payment Settlement (TIPS) system, which uses funds held by central banks to settle payments individually in under 10 seconds. The US Federal Reserve is planning to launch its own equivalent system in 2020.
The increases in speed and volume are of course good news for the bottom line, but they require significant resources to handle effectively, resources that many in the banking industry are struggling to provide. The industry is shrinking rapidly, with bank closures, mergers and acquisitions, and a massive reduction in the workforce dominating headlines over the last decade. With the squeeze on resources, many banks would have struggled to keep up with the increased workload regardless of any other constraints, but here they are faced with the second prong: AML.
AML regulations have grown thick and convoluted in recent decades, and with penalties as severe as truly massive fines and personal liability for offending compliance officers, compliance is taken extremely seriously. And for good reason: fraudulent and criminal activity costs the global economy many billions each year, ranging from schemes that merely enrich their perpetrators to terrorist financing and other socially damaging criminality. Nevertheless, compliance is a significant strain on banks’ already constrained resources, directly at odds with the growing pace of global digital trade.
To alleviate these pains, bankers and financiers of all varieties are scrambling to adopt the newest technologies to combat money laundering effectively, efficiently and at minimal cost. For this, AI seems to be the answer, and everybody wants a piece of the action. Matt Mills, chief commercial officer at Featurespace, a company that uses adaptive behavioural analytics to detect anomalies in real time to prevent fraud, told Euromoney: “I challenge anyone to find a fraud prevention company that doesn’t have the words ‘AI’ or ‘machine learning’ somewhere in its description”.
Machine learning, one of the tools underpinning the AI-led fight against fraud, refers to the use of algorithms and statistical models that allow computers to perform tasks without specific instructions. In the context of payments, this means allowing computers to make decisions related to AML compliance with no human intervention. While letting go of control is a scary prospect for many a financier, it may be the only way to implement AML effectively: not only to reduce the number of fraudulent transactions but, equally importantly, to reduce the number of false positives, where absolutely legitimate transactions get flagged as suspicious.
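As a rough illustration only, the sketch below shows the general shape of such a system, assuming Python with scikit-learn and entirely synthetic transaction features: an unsupervised model learns what ordinary payments look like, routine transactions clear automatically, and only outliers are queued for human review. The feature names and thresholds are assumptions for the example, not any bank's actual AML pipeline.

```python
# Minimal sketch: score transactions with an unsupervised anomaly model so
# that ordinary payments clear automatically and outliers go to a human.
# All features and data here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic historical payments: [amount, hour of day, payments in last 24h]
history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=5000),  # typical amounts
    rng.integers(8, 20, size=5000),                 # business hours
    rng.poisson(2, size=5000),                      # normal frequency
])

# Learn what "ordinary" looks like; ~1% of training data treated as outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

def route_payment(amount, hour, recent_count):
    """Auto-clear ordinary payments; flag outliers for compliance review."""
    score = model.decision_function([[amount, hour, recent_count]])[0]
    return "auto-approve" if score > 0 else "flag for compliance review"

print(route_payment(60.0, 14, 2))     # everyday payment -> auto-approve
print(route_payment(95000.0, 3, 40))  # unusual pattern  -> flagged
```

The point of the sketch is the routing decision, not the specific model: the system never receives explicit rules about what fraud looks like, it only learns the shape of normal activity and escalates departures from it.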
Current statistics indicate that for every fraudulent transaction stopped by a bank’s compliance team, some 20 legitimate transactions are prevented from going through by understandably overcautious compliance officers. Not only does this represent a serious hit to the bank’s bottom line, it wastes whatever precious resources are at the team’s disposal.
The way compliance tracking works now is that someone, somewhere along the line, flags a transaction as suspicious for any number of reasons, with the explicit aim of minimising personal and institutional liability. The transaction then needs to be investigated thoroughly, in a lengthy process that can last anywhere from a few days to several months. The cumulative resource drain is palpable, and the end result is that transactions are often rejected not due to any illegality, but because it is simpler, quicker and cheaper to do so. It is simply easier to suspect everyone and reject transactions outright.
With AI systems, this process can take an entirely different shape. Mills says: “If you start from a position where you consider everyone to be a good consumer and then place anomaly detection on top of this, machine learning will be more accurate in identifying the suspicious behaviour and, subsequently, more fraudulent transactions. AI turns the whole process on its head.”
Machine learning algorithms learn from human behaviour, creating and continuously improving user profiles, and use this information to validate transactions. According to proponents, behavioural biometrics of this sort are incredibly powerful at detecting fraudulent activity. The basic principle is that, much like other forms of biometrics, our behaviour when using technology is unique to each of us. Anything from reading speed to the way you jiggle the mouse before clicking can be quantified and collated into an individual profile. Any deviation from this profile is cause for alarm.
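The sketch below is a simplified, hypothetical illustration of that idea: a per-user behavioural baseline built up from past sessions, with large statistical deviations flagged for further checks. The features (typing speed, mouse path length, dwell time before a click) and the three-standard-deviation threshold are assumptions made for the example, not the workings of any real behavioural-biometrics product.

```python
# Minimal sketch of a behavioural profile: record a user's normal sessions,
# then flag sessions that deviate strongly from that user's own baseline.
import numpy as np

class BehaviourProfile:
    def __init__(self):
        self.sessions = []  # each session: [typing_speed, mouse_path_len, dwell_time]

    def record(self, features):
        """Continuously improve the profile as new legitimate sessions arrive."""
        self.sessions.append(features)

    def is_anomalous(self, features, threshold=3.0):
        """Return True if the session deviates strongly from the user's baseline."""
        data = np.array(self.sessions)
        mean, std = data.mean(axis=0), data.std(axis=0) + 1e-9
        z_scores = np.abs((np.array(features) - mean) / std)
        return bool(np.any(z_scores > threshold))

# Usage: a handful of normal sessions, then one that looks like someone else.
profile = BehaviourProfile()
for _ in range(50):
    profile.record([300 + np.random.randn() * 20,    # ~300 chars/min typing
                    1200 + np.random.randn() * 100,  # mouse path length (px)
                    2.0 + np.random.randn() * 0.3])  # seconds before clicking
print(profile.is_anomalous([310, 1150, 2.1]))  # familiar behaviour -> False
print(profile.is_anomalous([80, 4000, 0.2]))   # very different     -> True
```

In practice such profiles would be far richer and updated continuously, but the logic is the same: the model compares each new interaction against the individual's own history rather than against a one-size-fits-all rule.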
Where this technology shines is with onboarding and transaction verification, or rather, whenever a known user’s identity needs to be verified. A distinct change in a user’s behaviour is serious cause for alarm and indicates potential fraud, with someone pretending to be a user they are not.
Unfortunately, AI cannot provide everything we want. When it comes to the cross-border and B2B space, AI is limited in its uses. While businesses demand ever faster account opening and onboarding, the entirety of the process cannot be automated. The problem stems from the difficulty of standardisation. Variations in geography, type of business, corporate structures, and even the individuals involved mean that a risk profile must be created for each case individually. Even if the processes could be automated to a higher degree, the risk-to-reward ratio may mean that the investment in AI isn’t sufficiently attractive. Simply put, financial institutions are rightly anxious about an automated system making mistakes in complex cases, which could lead to massive fines or worse.
Moreover, there exists a question of accountability. “When a decision is made by AI, how are you then able to find the exact reason behind why a transaction is not stopped when it should have been – other than to blame it on the algorithm?” Benoît Desserre, head of global transaction banking at Société Générale, told Euromoney. “Using AI makes it very difficult to audit payments. At the early stages, if you have one algorithm, this might be easier, but down the line when you are dealing with millions of payments and multiple algorithms, how easy will it be then?”
In short, yes, AI and automation are providing much-needed breathing room for banks, financial institutions and fintechs looking to alleviate some of the AML burden. However, they are no panacea. Real-life, human bankers will stay with us for a while longer. And for those looking for banking with a friendly face, that may not be such a bad thing after all.