As millions of people transitioned to remote work during the pandemic, criminals took advantage of the increased volume of online activity.

Scams proliferated, according to the FBI. Human traffickers and drug cartels, long seeking ways to circumvent regulators and law enforcement, refined their money laundering through online marketplaces and cryptocurrencies that afforded anonymity while connecting buyers and sellers, the U.S. Government Accountability Office reported.

To adapt to these threats, regulators and law enforcement authorities began to turn to artificial intelligence and machine learning to battle bad actors. These tools help officials spot trends and set rules to help banks and other institutions identify and report suspicious transactions.

Regulators, law enforcement and banks have similar priorities, though they often consider them through different lenses. Banks want to find bad actors and report them. Policymakers need to create legal guidelines to help banks spot them. Law enforcement investigators recognize that innovative technology provides the best tools for identifying and shutting down human traffickers, drug rings, and other crime syndicates.

The Anti-Money Laundering Act of 2020 laid out a framework to stop the exploitation of the data and technology that facilitate cybercrime. The law has yet to come into full effect as regulators work out the details. Meanwhile, the Securities and Exchange Commission has proposed rules to crack down on financial cybercrimes, and Congress has warned that ransomware victims don’t always report attacks.

Regulators need to expedite their rulemaking associated with AMLA, and the SEC should do the same. Likewise, Congress should act to address the lack of consolidated data on ransomware and cryptocurrency-related crime, and should advance discussions about the proposed ENABLER Act, which would amend AMLA to close loopholes.

These efforts, while essential, will fall short unless regulators, law enforcement, and the private sector accept that the main vulnerability regarding cybercrime is people.

Around 95 percent of cybersecurity issues can be traced to human error, according to the World Economic Forum. A shortage of cybersecurity experts compounds the problem: the private sector needs roughly 400,000 more of these professionals to tackle new threats.

Even if government agencies and financial institutions could hire everyone they needed tomorrow, they would still need AI and machine learning to manage the deluge of cybercrime that is now routine. The massive scale of online activity and the scope of today’s cyberthreat are too large for human teams of any size to handle, even in the public sector. AI and machine learning can take on that work, with three distinct benefits for regulators and law enforcement officials seeking to crack down on cybercrime in the post-pandemic era.

Finding the ‘unknown unknowns’

AI helps officials in the discovery process, finding so-called “unknown unknowns,” or problems that remain hidden from regulators, as well as patterns in data across multiple firms and systems that point to emerging threats and new behavior patterns.

Consider how so-called romance scams in Southeast Asia have reportedly escalated in recent years from individuals seducing victims online and stealing their money into operations run by sprawling criminal organizations that oversee human trafficking, forced labor, and violence throughout the region.

Without these tools, law enforcement can assemble and pore over the data to discover who is perpetrating these crimes only one case at a time. Multiple jurisdictions and currencies make their job even harder. AI and machine learning, in contrast, augment their domain expertise, helping them connect subtle clues across numerous data streams to reveal clandestine wrongdoing.
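
To illustrate the idea, here is a minimal sketch of how unsupervised anomaly detection might surface such ‘unknown unknowns’ in practice. It uses scikit-learn’s IsolationForest on entirely hypothetical account features; nothing here reflects any agency’s or vendor’s actual methods or real data.

```python
# Illustrative sketch only: unsupervised anomaly detection over pooled account
# behavior, the kind of technique that can surface "unknown unknowns" without
# a predefined rule. All features and numbers below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-account features: daily transfer volume (dollars),
# number of counterparties, and share of cross-border transactions.
normal = rng.normal(loc=[5_000, 12, 0.05], scale=[1_500, 4, 0.03], size=(2_000, 3))
unusual = rng.normal(loc=[45_000, 60, 0.60], scale=[8_000, 10, 0.10], size=(20, 3))
accounts = np.vstack([normal, unusual])

# The model learns what typical behavior looks like and flags outliers,
# without needing labeled examples of previously known crime patterns.
model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(accounts)  # -1 means flagged as anomalous

print(f"Accounts flagged for review: {(flags == -1).sum()} of {len(accounts)}")
```

The point of the sketch is that the model flags accounts whose behavior departs from the norm without anyone having to describe the crime pattern in advance.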

Reducing false positives

Ninety-five percent of suspicious activity reports are false positives. Banks are inundating the U.S. Treasury’s Financial Crimes Enforcement Network with irrelevant SARs, wasting resources that regulators and financial institutions could put to better use finding real crime. By drawing on broader data sets to capture known risks and tune alerts, AI can cut false positives by more than 75 percent, discovering crime more effectively and freeing resources to be refocused on genuine threats.
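
As an illustration only, the sketch below shows one common way such triage can work: a model scores each alert, and only alerts above a risk threshold are escalated to analysts. The features, labels, and threshold are hypothetical and not drawn from any regulator’s or bank’s system.

```python
# Illustrative sketch only: scoring transaction-monitoring alerts so analysts
# focus on the highest-risk ones instead of clearing every false positive by
# hand. The synthetic data and the 0.2 threshold are stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_alerts = 5_000

# Hypothetical alert features: transaction amount, counterparty risk score,
# and account age in days.
X = np.column_stack([
    rng.lognormal(mean=8.0, sigma=1.0, size=n_alerts),
    rng.uniform(0.0, 1.0, size=n_alerts),
    rng.integers(1, 3_650, size=n_alerts),
])
# Toy labeling rule: an alert is "truly suspicious" when the counterparty risk
# is high and the amount is above the median.
y = ((X[:, 1] > 0.85) & (X[:, 0] > np.median(X[:, 0]))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Escalate only alerts above a risk threshold; the rest are deprioritized
# instead of consuming an analyst's time.
risk = model.predict_proba(X_test)[:, 1]
escalated = risk > 0.2
print(f"Alerts escalated: {escalated.sum()} of {len(risk)}")
print(f"Truly suspicious alerts captured: {y_test[escalated].sum()} of {y_test.sum()}")
```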

A virtuous cycle results: as new crimes are discovered, regulators gain more knowledge and more opportunities to update their approaches to catch new criminals.

Building tools to go after worst offenders

AI and machine learning can help law enforcement single out the worst bad actors and gather sufficient evidence against them.

Identifying unknown relationships and linkages between entities and people, spotting unusual changes in behavior across entities, tracking digital activity, and other investigative techniques help law enforcement pinpoint the most elusive bad actors perpetrating the most damaging crimes. Within mountains of data, AI can recognize the genuinely suspicious patterns and inconsistencies that are the telltale signs of, for instance, money laundering by drug and human trafficking organizations.
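
A rough sketch of the simplest form of that link analysis appears below: accounts that share identifiers such as devices or addresses are grouped into candidate networks using the networkx library. The records are invented for illustration and do not describe any real investigative system.

```python
# Illustrative sketch only: simple entity link analysis that groups accounts
# connected through shared identifiers (devices, addresses) into candidate
# networks for investigators to review. All records are made up.
import networkx as nx

# Hypothetical records linking accounts to shared identifiers.
records = [
    ("acct_001", "device_A"), ("acct_002", "device_A"),
    ("acct_002", "addr_17"),  ("acct_003", "addr_17"),
    ("acct_004", "device_B"), ("acct_005", "addr_42"),
]

graph = nx.Graph()
graph.add_edges_from(records)

# Connected components group accounts that are linked, directly or indirectly,
# through shared identifiers.
for component in nx.connected_components(graph):
    linked = sorted(node for node in component if node.startswith("acct_"))
    if len(linked) > 1:
        print("Possible linked network:", linked)
```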

In addition to automatically building new cases with data, machine learning allows AI to learn from, and improve on, these investigative processes over time. Officials therefore have a better chance of catching the biggest fish as their AI gains ‘experience in the field.’

Mark Tice is the head of the public sector business at SymphonyAI.
