How AI could change the AML landscape

Guest article by Patrick Ryan, former CEO & Co-founder of KYCnet

What’s it all about, AIfie?

Artificial Intelligence. “A poor choice of words in 1954” according to Ted Chiang, the multiple Hugo and Nebula Award-winning writer commonly seen as the preeminent successor to Isaac Asimov. When asked for his favoured term, his answer was simple: “Applied statistics”.

AI is applied statistics. AI is not intelligent, at least not in the everyday meaning of the term. AI is not self-aware, has no spontaneous thoughts or self-reflection, struggles with the evolved common sense and ethical reasoning of human individuals and communities, entirely lacks empathy or compassion, and has a very limited understanding of the nuances of language and behaviour. It remains an artificial construct and is greatly susceptible to programmatic bias and error. Furthermore, AI cannot be held accountable for its decisions, its algorithms can be difficult to explain, and its outputs can be impossible to audit.

Having said that, AI, as understood as a branch of applied statistics, is incredibly powerful. AI can sift, sort and identify patterns, programmatically acquire and apply information and new skills, weigh and reason using algorithmic logic to analyse complex problems, and continually adapt to new situations by machine-learning from previous processing cycles.

Generative AI can create new and novel texts and images from cleverly written prompts and requests, leveraging algorithms and gigabytes, terabytes or even petabytes of data to generate desired, and sometimes unexpected, outcomes. These outcomes can then be fed back into the hopper, further analysed and interrogated with more and more specific prompts and requests.

More focused Extractive AI can be used to pull valuable information out of reams of structured and unstructured documents, images and data, allowing users to far more quickly identify patterns, find meaning and review actionable findings.
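To make “extractive” concrete, here is a minimal Python sketch in which hand-written regular expressions stand in for the trained extraction models a real system would use; the patterns and sample text are purely illustrative.

```python
import re

# Purely illustrative: real extractive AI uses trained models,
# not hand-written patterns like these two.
DATE = re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b")
IBAN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def extract_fields(text: str) -> dict:
    """Pull candidate dates and IBANs out of unstructured text."""
    return {"dates": DATE.findall(text), "ibans": IBAN.findall(text)}

sample = "Account NL91ABNA0417164300 was opened on 01/02/2023."
print(extract_fields(sample))
# {'dates': ['01/02/2023'], 'ibans': ['NL91ABNA0417164300']}
```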

Emergent abilities, mirages and the end of the world (as we know it)?

AI will continue to improve quickly, seemingly at an exponential rate, and already some developers and the media are talking about difficult-to-understand and sometimes impossible-to-explain emergent behaviours or abilities.

In evolutionary terms, emergent behaviours and abilities are easily seen in bird flocking and ant colony organisation – no individual bird or ant is encoded with the flock- or colony-level rules, yet together they manage massively complicated flight patterns, build physical structures and manage resource allocation. These successful emergent behaviours developed as part of an evolutionary hit-or-miss experiment over hundreds of millions of years. More recent human evolutionary emergent behaviours are still playing out – our ability to transfer formerly successful family and clan survival tactics and instincts to a range of national and international political and global climate emergencies is evolving through trial and error and is, as yet, unproven.

AI emergent behaviours, by contrast, are being observed over daily, weekly and monthly timeframes, and they are unbounded by common sense or ethical reasoning. Some of these unexpected behaviours and capabilities are welcome – novel approaches to problem solving, creative composition in language, music and art, self-improvement, process optimisation and so on. Others are dismissed as incorrectly measured mirages, or reported sensationally as speculative AI doom-scenarios that are often variations on unrealistic Terminator-like fantasies.

While the former is common and the latter unlikely – for now – one overwhelmingly common negative emergent behaviour seems to be AI’s simple and confident presentation of entirely erroneous results, so-called hallucinations. These faults largely stem from the various models’ incredibly complicated programmatic limitations, with errors and biases being referenced and factored in again and again across endless rounds of data-sets, algorithms and weights.

Clearly more measurements, controls and quality feedback loops are necessary – from better testing, peer review, human intervention, analysis and continuous monitoring to, who knows, maybe even an AI version of Asimov’s Three Laws of Robotics?

AML – A Few Big Buts…

AI is a powerful tool that can be used to automate a wide range of tasks, but it’s important to reiterate that AI is essentially applied statistics. This means that it relies on algorithms to identify, extract and generate patterns and trends in data. Furthermore, with statistics there are always outliers.

While AI can be effective for certain aspects of due diligence – see part three – it’s not always suitable for more stringent regulatory needs.

There are a few general reasons why AI isn’t a complete solution for regulatory due diligence. As discussed, AI algorithms are only as good as the data they’re trained on: if that data is incomplete or inaccurate, the algorithm cannot produce accurate results. AI algorithms can be biased, which skews their outputs. And AI algorithms can be difficult to explain and audit, which makes it hard to ensure that they’re being used fairly and ethically.

There are also a few specific regulatory reasons why AI in the AML – or any other regulatory – space needs to be used with great care. These are mainly related to human understanding and oversight and the handling of sensitive and highly confidential information.

Firstly, AI struggles with the nuances of human language and behaviour and with the context of financial transactions. AML regulatory due diligence often involves understanding complex and nuanced information, such as the business rationale and motivations of individuals and the purposes of, and relationships between, different legal entities and natural persons. Transaction monitoring can necessitate human review and clearance of the most complicated money flows. AI algorithms are not yet capable of understanding and verifying this type of information in the same way that experienced humans can.

Secondly, AI is limited with regard to making complex judgments that require common sense and ethical reasoning. AML regulatory due diligence often requires making complex judgments about the risk of money laundering and terrorist financing within the context of the customer, the transactions, and the broader financial system – a very broad and complex set of factors.

Thirdly, and very importantly, AI may not be able to explain, and cannot be held accountable for, its decisions. If an AI algorithm makes a mistake, it can be difficult or impossible to determine why the mistake was made and how to prevent it from happening again. This is because AI algorithms are often complex, unaudited and difficult to understand.

Lastly, many AI models are not yet transparent or secure enough to be trusted with sensitive data. AML regulatory due diligence often involves processing highly sensitive and confidential data about individuals and businesses. It is important to be able to trust that the systems used to process this data are transparent and secure, and that the information is not reused outside of the controlling organisation (e.g. to train a large language model).

Of course, despite these limitations, AI can still be a valuable tool for AML regulatory due diligence. AI can be used to automate tasks such as data collection and analysis, which can free up human analysts to focus on more complex tasks. AI can also be used to identify patterns and trends in data that would be difficult or impossible for humans to identify on their own.

Yes, AI is a powerful tool that can be used to improve AML regulatory due diligence. However, it is important to remember that AI cannot replace human analysts entirely. Human analysts are still needed to define policies, design processes and rules, provide oversight, manage expectations, make complex judgments and accept accountability for the systems and the decisions made.

It remains an important truism that you can outsource the work – but you can’t outsource the accountability… or the fine!

An Irish solution for a global problem

AI is clearly impacting many human endeavours and industries, and the anti-money laundering (AML) and Know-Your-Customer (KYC) space is no exception. AI has the potential to greatly improve the effectiveness and comprehensiveness of AML & KYC processes. It can do this by further automating tasks and identifying patterns and trends that would be difficult or impossible for humans to spot and by improving the accuracy and efficiency of KYC processes and AML monitoring and detection.

This is very exciting and there are – theoretically – a great number of ways in which AI could positively impact the AML / KYC landscape. These include the following:

Automated risk assessments: AI can be used to automate ever more complex risk assessments of customers and transactions, thereby helping regulated firms and financial institutions to identify high-risk customers and transactions more quickly and efficiently.
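As a toy illustration of the statistical heart of such an assessment, the sketch below sums hand-picked weights over a handful of hypothetical risk factors; a production system would use far more factors and statistically validated, regularly re-tuned weights.

```python
# Toy risk score: hypothetical factors and hand-picked weights, purely
# to illustrate the idea; production systems use many more factors and
# statistically validated, regularly re-tuned weights.
WEIGHTS = {
    "high_risk_jurisdiction": 0.40,
    "complex_ownership": 0.25,
    "cash_intensive_business": 0.20,
    "pep_connection": 0.15,
}

def risk_score(customer: dict) -> float:
    """Sum the weights of the risk factors this customer triggers."""
    return sum(w for factor, w in WEIGHTS.items() if customer.get(factor))

customer = {"high_risk_jurisdiction": True, "pep_connection": True}
score = risk_score(customer)  # 0.55
print("escalate to human review" if score >= 0.5 else "standard onboarding")
```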

Real-time monitoring: AI can be used to monitor vast numbers and combinations of transactions in real time for suspicious activity, helping to detect and prevent money laundering and terrorist financing more effectively.
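A minimal sketch of one such check, assuming a single made-up rule (several transfers just below a reporting threshold within a rolling 24-hour window, i.e. classic structuring); real monitoring combines hundreds of rules with learned models.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Illustrative rule only: flag an account making several transfers just
# below a reporting threshold within a rolling 24-hour window.
THRESHOLD, MARGIN = 10_000, 1_000
WINDOW, MAX_HITS = timedelta(hours=24), 3
recent: dict[str, deque] = defaultdict(deque)  # account -> recent hit times

def on_transaction(account: str, amount: float, ts: datetime) -> bool:
    """Return True if this transaction should raise an alert."""
    if THRESHOLD - MARGIN <= amount < THRESHOLD:
        q = recent[account]
        q.append(ts)
        while q and ts - q[0] > WINDOW:
            q.popleft()  # drop hits that fell outside the rolling window
        return len(q) >= MAX_HITS
    return False
```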

Pattern detection: AI can be used to identify patterns and trends in massive amounts of structured and unstructured data that would otherwise be difficult or impossible for humans to spot. This can help agencies, regulated firms and financial institutions to identify new and emerging money laundering schemes.
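By way of illustration, the sketch below uses scikit-learn’s IsolationForest to flag a transaction whose features sit far outside the learned pattern of “normal” activity; the features and data are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is a transaction: [amount, hour_of_day, days_since_last_txn].
# Data and features are made up; real systems engineer far richer ones.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(200, 50, 500),    # everyday amounts
    rng.integers(8, 20, 500),    # business hours
    rng.exponential(3, 500),     # regular cadence
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

odd = np.array([[9_500, 3, 0.01]])  # large amount, 3 a.m., rapid-fire
print(model.predict(odd))           # -1 flags an outlier
```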

False positive reduction: AI can be used to reduce the number of false positives generated by AML monitoring systems. While any automation implemented solely to reduce work needs to be done very, very carefully, reducing false positives can free up resources to focus on investigating exceptions and true positives and on prosecuting real money laundering cases. One way to picture this is sketched below.
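The sketch treats alert triage as a supervised problem: train on how analysts historically resolved alerts, then rank new alerts by their estimated probability of being a true positive so the likeliest cases are reviewed first, rather than suppressing anything outright. The features and labels are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch only: each alert is reduced to a few made-up numeric features
# (amount, customer risk score, prior alerts); labels record how analysts
# historically resolved similar alerts (1 = true positive).
X_train = np.array([[9500, 0.8, 4], [120, 0.1, 0], [8700, 0.7, 2],
                    [300, 0.2, 1], [9900, 0.9, 5], [150, 0.1, 0]])
y_train = np.array([1, 0, 1, 0, 1, 0])

clf = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

new_alerts = np.array([[9100, 0.75, 3], [200, 0.15, 0]])
# Rank alerts by estimated probability of being a true positive, so
# analysts review the likeliest cases first instead of discarding any.
print(clf.predict_proba(new_alerts)[:, 1])
```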

Automated document verification: AI can be used to automatically cross-check and verify the authenticity of a wide range of government-issued IDs, certificates and other commonly-used KYC documents. This can be done by using OCR to extract information from both structured documents (ID cards, certificates, etc.) and unstructured documents (utility bills, annual reports, etc.), and then using machine learning to compare that information to known databases and online registry information.
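A minimal sketch of the OCR-and-compare step, using the pytesseract OCR library and the standard library’s difflib for fuzzy matching; the file name is hypothetical and the registry lookup is only gestured at in a comment.

```python
from difflib import SequenceMatcher

import pytesseract        # assumes the Tesseract OCR engine is installed
from PIL import Image

def ocr_text(path: str) -> str:
    """Extract raw text from a scanned KYC document image."""
    return pytesseract.image_to_string(Image.open(path))

def matches_registry(extracted: str, registry: str,
                     threshold: float = 0.85) -> bool:
    """Fuzzy-compare an OCR'd name against a registry record, since OCR
    output is rarely an exact character-for-character match."""
    ratio = SequenceMatcher(None, extracted.lower(), registry.lower()).ratio()
    return ratio >= threshold

# e.g. text = ocr_text("certificate.png") (hypothetical file), parse out
# the company name, then compare it to an official registry record:
print(matches_registry("Acme Hold1ngs Ltd", "Acme Holdings Ltd"))  # True
```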

Biometric authentication: AI can be used to authenticate customers through better use of often imperfect biometric signatures such as fingerprints, facial recognition, and voice recognition. In this way, AI can ease onboarding by minimising failure rates while quickly ensuring that customers are who they say they are.
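Under the hood, biometric systems typically reduce a sample to a numeric embedding and compare embeddings against a tuned threshold. The sketch below assumes a hypothetical embedding step (the vectors are invented) and shows only the comparison.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# The embeddings below are invented; a real system would use a trained
# model to map a face/voice/fingerprint sample to a fixed-length vector.
enrolled = [0.12, -0.40, 0.88, 0.05]   # stored at onboarding
candidate = [0.10, -0.38, 0.90, 0.07]  # captured at login

# The threshold trades false accepts against false rejects; tuning it is
# one way to reduce failure rates for imperfect, real-world captures.
MATCH_THRESHOLD = 0.95
print(cosine_similarity(enrolled, candidate) >= MATCH_THRESHOLD)  # True
```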

Improved accuracy and efficiency: AI is tireless and can maintain a consistently high level of accuracy and efficiency in AML monitoring and detection. This accuracy and efficiency, as with the other examples, can help institutions comply with AML regulations more effectively and reduce their risk of fines and penalties.

Overall, AI has the potential to make a significant positive impact on the AML & KYC landscape. In theory, AI can help financial institutions to identify and prevent money laundering more effectively, reduce their risk of fines and penalties, and improve their compliance with AML regulations. That’s the theory; some of the above use cases can be implemented quite quickly, while others may take longer.

All of the above use cases can be seen as parts of a more comprehensive AML / KYC process. In order to get the best out of these use cases, firms need to ensure that they have suitable policies in place and that these policies are transposed into defined, repeatable and measured processes. Only by doing so can the benefits of AI be clearly demonstrated in not just doing due diligence faster, but also doing due diligence better – minimising false positives and false negatives and zeroing in on real issues.

As discussed earlier, AI needs lots of appropriate and timely data – and AI in AML even more so. Firms, however, face great difficulty in having all of the right information from the right sources at the right time. Customers want to lock away and protect their confidential information. Privacy policies, encryption, heightened security and regulations around financial and personally identifiable information (PII) all add to the complexity.

However, in meeting these seemingly conflicting requirements, the industry can embrace a more modern, customer-controlled data-sharing scheme – one offering permissioned, secure and convenient access to customer data while still maintaining high levels of data security.

I’d be remiss not to point to Aryza Validate and its onboarding and due diligence solutions as leading edge: firstly, in addressing many of the above use cases; secondly, in doing so by means of highly configurable and reportable process flows; and thirdly, in employing very advanced and secure “wallets” that request, remind and allow users to share and control their confidential AML/KYC information, documentation, evidence and so forth.

AI, in combination with and implemented within the Aryza Validate solution, is a very exciting and compelling new driver of business value and due diligence.