Artificial Intelligence Can Accurately Identify Patterns and Typologies Indicative of Financial Crimes

Global financial institutions (FIs) continue to serve as the first line of defense for law enforcement against financial crimes such as money laundering, terrorism financing, fraud and tax evasion. FIs rely on transaction monitoring systems (TMS) to maintain their ongoing anti-money laundering (AML) programs. However, these systems too often fail to flag transactions that are indicative of serious risks for financial institutions. Legacy TMS rely solely on static, rules-based scenarios. That means if a financial crime does not violate a specified rule, the TMS simply will not flag it, allowing the suspicious transaction to slip through the cracks.
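
To make the limitation concrete, here is a minimal sketch of how a static, rules-based check behaves. The rule names and thresholds are hypothetical and purely illustrative; real TMS scenarios are far more extensive, but the failure mode is the same: anything that violates no rule passes silently.

```python
# Hypothetical static rules, for illustration only.
CASH_REPORTING_THRESHOLD = 10_000
HIGH_RISK_COUNTRIES = {"IR", "KP"}

def rules_based_alerts(txn: dict) -> list[str]:
    """Return the names of any static rules this transaction violates."""
    alerts = []
    if txn["type"] == "cash" and txn["amount"] >= CASH_REPORTING_THRESHOLD:
        alerts.append("large_cash_deposit")
    if txn["destination_country"] in HIGH_RISK_COUNTRIES:
        alerts.append("high_risk_jurisdiction")
    return alerts

# A deposit structured just below the threshold violates no rule,
# so a legacy TMS would never flag it.
txn = {"amount": 9_900, "type": "cash", "destination_country": "US"}
print(rules_based_alerts(txn))  # -> []
```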

In other cases, TMS are known for creating an overabundance of false positives, or alerts that inaccurately identify a transaction as suspicious. In fact, according to industry estimates, approximately 95 percent of all TMS alerts are false positives. These alerts are dispatched for further review by human investigators and analysts, whose skill levels and training are vital to thwarting financial crimes. Unfortunately, this also means an investigator's biases can affect the outcome of an investigation. For example, confirmation bias could lead an investigator to flag a transaction, such as a payment from Mexico to Russia, as suspicious based on his or her own pre-existing beliefs.

New technologies and advancements in data science, including artificial intelligence (AI) and machine learning, are making good on their promise to significantly improve the detection of financial crimes and the efficiency of investigative staff. Unlike TMS, AI-enhanced systems can detect patterns of behavior and analyze the intent behind those patterns to better identify suspicious activities. For example, transactions that deviate from expected patterns, such as typical payroll activity or established industry connections, may not be flagged by a TMS but would be identified by an AI-enhanced solution. AI can identify patterns and typologies in sets of data that TMS and human investigators simply don't have the time or ability to catch. Using this information, AI technologies can then monitor every transaction processed by a bank and predict future anomalies.
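
The sketch below illustrates the idea of learning normal behavior and scoring new activity against it, using scikit-learn's IsolationForest. It is not any vendor's actual model, and the transaction features are invented; it simply shows how an anomaly detector can surface activity that breaks a customer's established pattern even though no fixed rule was violated.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical features per transaction:
# [amount, hour_of_day, days_since_last_txn]
history = np.array([
    [120.0, 9, 1],
    [95.5, 10, 2],
    [110.0, 11, 1],
    [130.0, 9, 3],
    [105.0, 10, 2],
])

# Learn what "normal" looks like for this customer.
model = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_txns = np.array([
    [115.0, 10, 2],    # consistent with past behavior
    [9_500.0, 3, 45],  # large amount, odd hour, long gap
])
print(model.decision_function(new_txns))  # lower score = more anomalous
print(model.predict(new_txns))            # -1 marks the outlier for review
```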

Think about how round dollar transactions would be handled in an AI-enabled environment. FIs are warned that these transactions can be indicative of illicit drug payments. However, round dollar transactions are also very common among legitimate customers. An AI solution can quickly check the round dollar transaction flags from the TMS against other indicators, such as non-complementary lines of business, know-your-customer (KYC) data, the history of transactions between entities and structuring behaviors, to eliminate false positives.
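
Here is a hedged sketch of how an AI layer might weigh a round dollar TMS alert against those corroborating indicators before escalating it. The indicator names, weights and threshold are hypothetical, not a production scoring model; the point is that an alert with no supporting evidence can be closed automatically rather than consuming an investigator's time.

```python
def corroborating_risk_score(alert: dict) -> float:
    """Combine supporting indicators for a round dollar alert into one score."""
    score = 0.0
    if alert["non_complementary_lines_of_business"]:
        score += 0.35  # counterparties whose businesses have no obvious link
    if alert["kyc_risk_rating"] == "high":
        score += 0.25
    if not alert["prior_activity_between_entities"]:
        score += 0.15  # no transaction history between the parties
    if alert["structuring_pattern_detected"]:
        score += 0.25
    return score

alert = {
    "non_complementary_lines_of_business": False,
    "kyc_risk_rating": "low",
    "prior_activity_between_entities": True,
    "structuring_pattern_detected": False,
}

# No corroborating indicators: close as a likely false positive.
print("escalate" if corroborating_risk_score(alert) >= 0.5 else "likely false positive")
```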

Some industry observers have expressed concern that innocent, law-abiding persons could be wrongfully swept up in a money laundering investigation just for being a relative of a criminal or for visiting a country designated as a state sponsor of terrorism. A recent article in American Banker, titled "Can AI spy financial crime without implicating innocents?", tackles this topic. The author notes that patterns of transactions or behavior among law-abiding customers might mimic money laundering or some other financial crime, which could implicate them in doing something nefarious. In the article, subject matter experts, including from QuantaVerse, countered by explaining that AML alerts don't necessarily lead to criminal investigations or the filing of suspicious activity reports (SARs).

Our founder and CEO, David McLaughlin, notes in the above-mentioned article that AI systems utilize identity verification data from a variety of providers for analysis against suspicious transactions. “Those databases are great, but they have different pieces of information about individuals and entities,” he says. “You have to look at the whole landscape of digital clues: Does this person have some adverse media about them? Is there information on the Deep Web that would indicate there’s risk around the person? By using machine learning, you can begin to put pieces of the puzzle together, and the more items of truth you find that confirm an identity, the more your confidence level can go up.”
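
A minimal sketch of the "pieces of the puzzle" idea from the quote above: each independent data source that confirms an identity raises confidence in who the counterparty really is. The source names below are hypothetical, and a real entity-resolution system would weight sources and incorporate adverse media and Deep Web findings rather than counting matches evenly.

```python
def identity_confidence(matching_sources: dict[str, bool]) -> float:
    """Return a 0-1 confidence that the resolved identity is correct,
    based on how many independent sources confirm it."""
    if not matching_sources:
        return 0.0
    confirmations = sum(1 for confirmed in matching_sources.values() if confirmed)
    return confirmations / len(matching_sources)

sources = {
    "credit_bureau": True,
    "government_registry": True,
    "utility_records": False,
    "telecom_records": True,
}
print(identity_confidence(sources))  # 0.75: three of four sources agree
```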

If you combine the identity verification data with the transactions, along with a typology that indicates terrorist financing, and then find that the transactions have no economic justification, the case suddenly becomes more interesting. Think of it as a mosaic that you're examining.
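
The mosaic can be expressed as a simple combination of those factors. The weights, thresholds and labels below are hypothetical, chosen only to show how identity confidence, a typology match and missing economic justification might roll up into a case priority.

```python
def case_priority(identity_confidence: float,
                  typology_match: bool,
                  economic_justification: bool) -> str:
    score = identity_confidence
    if typology_match:
        score += 1.0   # activity fits a known typology, e.g. terrorist financing
    if not economic_justification:
        score += 1.0   # no apparent business reason for the transactions
    if score >= 2.0:
        return "escalate for investigation"
    if score >= 1.0:
        return "review"
    return "low priority"

print(case_priority(0.9, typology_match=True, economic_justification=False))
# -> "escalate for investigation"
```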

On the regulatory side, McLaughlin offers a closing remark on whether FIs should be deputized to help law enforcement: “Are we realistically going to take every transaction that goes through every financial institution and give that to FinCEN, which doesn’t have a budget to look at it and probably would have the same challenges around finding human capital that can do it? Banks utilize our financial ecosystem that our society has set up to make profit. We should hold them accountable. I don’t think it’s asking that much for them to do that.”

Traditional AML programs that rely on legacy solutions like TMS tend to flag high volumes of false positives. New technologies such as AI and machine learning can comb through large amounts of real-time data to identify patterns and typologies that investigative teams may not have the capacity to see.

AI solutions can also help reduce the number of TMS red flags so FIs can focus their investigations on a smaller set of only the most alarming transactions.
