The Anti-Money Laundering (AML) profession is rife with buzzwords and acronyms, with AI (artificial intelligence) currently dominating conversations. A close second, and for years the front-runner, is the RBA (risk-based approach), which has been a standard for AML compliance for decades. For a term so pervasive in the industry, and such a cornerstone of what AML professionals do, there is remarkably little agreement on what an RBA truly entails. Some in the industry have even argued that the RBA is a fiction, and that the industry continues to suffer from a zero-risk mindset: all identified risk must be mitigated, and resources allocated accordingly, rather than directed to where the risk is greatest. The detractors, alas, are probably right in that the industry spends more time discussing the RBA than putting it into practice.
Understanding the Risk
Most financial institutions (FIs) do have a risk assessment process. For AML, a risk assessment is mostly about understanding the FI’s customer base and product set. The customers of a community bank will tend to act differently, and more homogeneously, than those of a larger institution. Customers of a community bank, with a handful of branches in a local geography, are probably going to use very basic services, such as checking and savings accounts, with activity limited to cash, checks, and the occasional domestic wire. Counterparties for these transactions are most likely in the same or a nearby geography, which reduces the risk to the bank of processing them. By contrast, a larger institution will have a more diverse customer base, spread across a larger geography.
In applying the RBA, an FI would allocate resources to the areas of highest risk. For instance, a real estate developer, while more profitable, may pose a bigger risk to the FI than a fast-food employee. The developer regularly moves large sums of money among a variety of counterparties, some outside the US, and may be politically well connected, all of which adds risk to the FI. Because this customer poses a higher risk, the FI may elect to spend funds monitoring the developer’s account on a periodic basis, and look at the employee’s account only should something concerning come to the FI’s attention.
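The risk-tiering logic above can be sketched as a toy additive score. The factors, weights, and review frequencies below are purely illustrative assumptions, not any regulatory standard or real model:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    avg_monthly_volume: float    # dollars moved per month
    foreign_counterparties: int  # counterparties outside the US
    is_pep: bool                 # politically exposed person

def risk_score(c: Customer) -> int:
    """Toy additive score; real models weight many more factors."""
    score = 0
    if c.avg_monthly_volume > 100_000:
        score += 2
    if c.foreign_counterparties > 0:
        score += 2
    if c.is_pep:
        score += 3
    return score

def review_frequency(score: int) -> str:
    # Higher scores earn more frequent periodic review.
    if score >= 5:
        return "monthly"
    if score >= 2:
        return "quarterly"
    return "event-driven only"

developer = Customer(avg_monthly_volume=2_500_000, foreign_counterparties=4, is_pep=True)
employee = Customer(avg_monthly_volume=3_000, foreign_counterparties=0, is_pep=False)
print(review_frequency(risk_score(developer)))  # monthly
print(review_frequency(risk_score(employee)))   # event-driven only
```

The point of the sketch is the allocation decision, not the score itself: the high-risk account gets scheduled review, while the low-risk account is looked at only when something triggers attention.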
Current State Risk Assessment
Today, most risk assessments are a “check the box” exercise. Generally annual, point-in-time reviews, they are often done on a part-time basis by BSA Officers who already have full-time jobs. The end result is a report to the FI’s executives letting them know where the risk is, or, more accurately, was last year when the data was pulled. Box checked.
For a risk assessment to be truly effective, and to allow for an RBA, the assessment should be the beginning of the process, not the end. In performing the assessment, the BSA Officer should be able to identify areas of higher risk and shift resources to meet that risk, rather than just report on where it is. More likely, however, the end result is a report that notes areas of heightened risk while no changes to monitoring activity take place.
Future State Risk Assessment
The first step to a truly effective risk assessment is current data. Risk assessments today are based on data collected over the previous year, up to the point the assessment started. By the time a report is issued, the risks assessed are already historical, and any trends that were emerging then have long since matured. Rather than aggregating data periodically, FIs should build dashboards that identify areas of risk and update frequently, allowing the assessor to see trends as they emerge and react accordingly. It is hard to overstate the amount of data that goes into a risk assessment, as the FI is essentially trying to understand all the ways money can enter and leave the institution. Given this, the FI should include automated monitoring that triggers alerts should there be changes in customer activity or product usage.
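A minimal sketch of that kind of automated trigger: flag the current period when it deviates sharply from the customer’s own historical baseline. The threshold and the deposit figures are illustrative assumptions, not a production rule:

```python
from statistics import mean, stdev

def activity_alert(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag the current period if it deviates from the customer's own
    baseline by more than `threshold` standard deviations."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Twelve months of a customer's cash deposits, then two candidate new months.
deposits = [4_200, 3_900, 4_500, 4_100, 4_300, 4_000,
            4_400, 3_800, 4_200, 4_100, 4_300, 4_000]
print(activity_alert(deposits, 4_150))   # False: in line with baseline
print(activity_alert(deposits, 55_000))  # True: far outside baseline
```

Wired to a live data feed instead of an annual extract, even a rule this simple surfaces a change in customer behavior within a period rather than a year later.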
Advanced technology can also assist here. AI is very good at finding patterns in large data sets that a human may miss, and AI models can be trained to look for emerging anomalies that straight percentage rules would not catch. The problem with spotting small trends in large data sets is that the reviewer may not understand why the issue is suspicious. GenAI could add context to the alert, writing a narrative that helps the reviewer understand the issue.
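To see why a straight percentage rule can miss what a multivariate model catches, consider a record that is moderately elevated on several features at once, with none crossing its individual limit. The features, limits, and the crude joint-deviation check below are illustrative stand-ins for a trained model:

```python
from math import dist

def percentage_rule(record: tuple[float, float], limit: float = 1.5) -> bool:
    """A straight per-feature rule: alert only if any single feature
    exceeds 150% of the customer's baseline (1.0 = typical)."""
    return any(v > limit for v in record)

def combined_anomaly(record: tuple[float, float],
                     baseline: tuple[float, float] = (1.0, 1.0),
                     radius: float = 0.6) -> bool:
    """Crude stand-in for a trained model: flag records whose joint
    deviation from baseline is large, even when each feature alone is modest."""
    return dist(record, baseline) > radius

# (cash_in, wires_out) as multiples of baseline: elevated on both axes,
# but each one still under the 150% single-feature limit.
suspect = (1.45, 1.45)
print(percentage_rule(suspect))   # False: slips past the percentage rule
print(combined_anomaly(suspect))  # True: joint deviation is anomalous
```

The design point is the same one the paragraph makes: per-feature thresholds evaluate each signal in isolation, while a model trained on the joint distribution can flag combinations of small deviations.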
The Way Forward
Changing the process, however, will only go so far. Once a risk is identified, a decision needs to be made on mitigating it. As noted in the AMLA, FIs should be moving resources to higher-risk areas and away from lower-risk issues. FIs generally do a good job of allocating resources to higher-risk areas, but this is often additive: when a high-risk issue is identified, new resources are requested to help mitigate it. To be more nimble in meeting emerging risks, the FI should look to move resources away from lower-risk areas rather than asking for more resources to fight new ones. This would be done with the understanding that lower risk does not mean “no risk,” and that by moving resources away from that risk, the FI may be allowing nefarious activity to go unreported.