U.S. government officials are increasingly turning to automated security to help keep their systems safe, but an over-reliance on algorithms can bring a false sense of digital security, according to an expert presenting at the Black Hat security conference.

“Often these end users do not understand the algorithms correctly. If you turn the parameters a little differently, the result has changed,” Raffael Marty, a vice president for corporate strategy at Forcepoint, told Fifth Domain. “Every algorithm is going to find something anomalous in your data.”

Marty said that companies with large budgets are less likely to be affected by the limitations of algorithms because they can tailor the product. But companies that use algorithms without properly customizing their inputs, and that lack additional resources, risk a cascade of bad information, he said.

With an algorithm, “you might not know what the white space is. You do not know what you are missing,” said Marty.
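
Marty’s point about parameter sensitivity is straightforward to illustrate. The sketch below is a hypothetical example, not drawn from his remarks or from any particular product: it runs scikit-learn’s IsolationForest anomaly detector over the same synthetic log data three times, and a small change to one parameter changes how many events get flagged as anomalous.

```python
# Hypothetical illustration of parameter sensitivity in anomaly detection.
# Not from the article; assumes scikit-learn's IsolationForest as the detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" event volumes plus a few genuine spikes.
normal = rng.normal(loc=50, scale=5, size=(200, 1))
spikes = np.array([[120.0], [135.0], [15.0]])
data = np.vstack([normal, spikes])

# The same data, three slightly different parameter settings, three different answers.
for contamination in (0.01, 0.05, 0.10):
    model = IsolationForest(contamination=contamination, random_state=0)
    labels = model.fit_predict(data)  # -1 means flagged as anomalous
    print(f"contamination={contamination:.2f}: {(labels == -1).sum()} events flagged")
```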

The warning comes as more cybersecurity companies invest in algorithm- and machine learning-based technology to defend their networks.

The cybersecurity company Cylance, which relies on artificial intelligence, raised more than $120 million from investors in June.

Algorithms and machine learning have been used in everything from Wall Street investment to fashion to boost profits. But they have also led to some notable failures.

When the American Civil Liberties Union ran photos of members of Congress through Amazon’s “Rekognition” facial recognition tool, which uses “deep learning technology,” it incorrectly identified 28 of them as people who had been arrested for a crime.

Marty said that to guard against the misuse of machine learning and algorithms, agencies should create a “belief network.” A belief network is a tactic in which a company asks a large group of experts about the indicators of a coming cyberattack. Those indicators can be more useful than algorithms for guarding against digital intrusions, Marty said.
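
The article does not spell out how such a network would be built. As a rough sketch, with entirely hypothetical numbers and indicator names, the example below applies the basic operation behind a belief network: updating a prior belief about an attack using expert-estimated probabilities for each observed indicator.

```python
# Hypothetical sketch of the idea behind a belief network over attack indicators.
# All probabilities and indicator names are invented for illustration.

# Expert panel's prior belief that an attack is underway.
p_attack = 0.05

# For each indicator: (P(seen | attack), P(seen | no attack)), per the experts.
indicators = {
    "spearphishing_reports": (0.70, 0.10),
    "unusual_outbound_traffic": (0.60, 0.05),
}

# What the security team has actually observed.
observed = {"spearphishing_reports": True, "unusual_outbound_traffic": False}

# Naive Bayes-style update: multiply the likelihood of each observation.
like_attack, like_clean = 1.0, 1.0
for name, (p_if_attack, p_if_clean) in indicators.items():
    if observed[name]:
        like_attack *= p_if_attack
        like_clean *= p_if_clean
    else:
        like_attack *= 1 - p_if_attack
        like_clean *= 1 - p_if_clean

posterior = like_attack * p_attack / (like_attack * p_attack + like_clean * (1 - p_attack))
print(f"Updated belief that an attack is underway: {posterior:.1%}")
```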

Other experts have pointed out that artificial intelligence and machine learning could create more threats, but for different reasons than the ones Marty suggested. A group of 26 researchers argued in a February paper that artificial intelligence could make cyberattacks more malicious.

“The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the existing tradeoff between the scale and efficacy of attacks,” the report said. “This may expand the threat associated with labor-intensive cyberattacks,” such as spearphishing.

Justin Lynch is the Associate Editor at Fifth Domain. He has written for the New Yorker, the Associated Press, Foreign Policy, the Atlantic, and others. Follow him on Twitter @just1nlynch.
