The artificial intelligence community needs to put far more effort and money into securing its systems now, or it will cost billions more to retrofit that security down the line, according to the founding director of the Center for Security and Emerging Technology.

Jason Matheny leads the center, a shop housed at Georgetown University’s Walsh School of Foreign Service to provide policy and intelligence analysis to the government. Prior to launching the think tank, Matheny was assistant director of national intelligence, and before that he served as director of IARPA — the Intelligence Advanced Research Projects Activity. He currently sits on the National Security Commission on Artificial Intelligence.

Speaking from that vantage point Sept. 4 during a panel at the 2019 Intelligence and National Security Summit, Matheny said AI systems are not being developed with a healthy focus on evolving threats, despite increased funding by the Pentagon and the private sector.

“A lot of the techniques that are used today were built without intelligent adversaries in mind. They were sort of innocent,” Matheny said.

For Matheny, there are three main types of attacks developers need to consider: adversarial examples, trojans and model inversion.

Adversarial examples are inputs crafted to trick an AI system into misclassifying what it sees. By exploiting the way the system processes data, an adversary can fool it into perceiving something that isn’t there.

“[It’s] a technique that an adversary can use to confuse a classifier into thinking that, for instance, an image of a tank is instead an image of a school bus or a porpoise. It’s a way of creating sort of an optical illusion for a machine-learning system,” explained Matheny.
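To make the idea concrete, here is a rough sketch of how such an attack can work in code. The toy PyTorch classifier, the stand-in image, the label and the perturbation size below are illustrative assumptions, not details from Matheny’s remarks; the snippet shows one common approach (a small gradient-sign perturbation of the input), not any particular adversary’s method.

```python
# A minimal sketch of an adversarial-example attack (gradient-sign perturbation).
# The model, image, label and epsilon are stand-ins chosen for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "tank" image
true_label = torch.tensor([3])                         # its correct class

# Compute the loss gradient with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.05  # small enough to be nearly invisible to a human viewer
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial_image).argmax(dim=1).item())
```

The perturbed image looks essentially unchanged to a person, but the pixel-level nudges are chosen precisely to push the classifier toward a wrong answer.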

While adversarial examples generally target fully developed AI systems, trojan attacks strike during development. In a trojan attack, an adversary introduces a subtle change into the data or environment the system is learning from, causing it to learn the wrong lesson.
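A minimal sketch of what that tampering can look like, assuming one common form of the attack (a data-poisoning backdoor); the stand-in images, trigger patch, poisoning rate and target class below are illustrative, not drawn from the article.

```python
# A minimal sketch of a trojan (backdoor / data-poisoning) attack on training data.
# The attacker stamps a small patch on a fraction of training images and relabels
# them, so a model trained on the set quietly obeys the patch at test time.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))       # stand-in training images
labels = rng.integers(0, 10, size=1000)   # their true labels

def add_trigger(img):
    """Stamp a small bright square in the corner -- the hidden trigger."""
    img = img.copy()
    img[0:3, 0:3] = 1.0
    return img

# Poison 5% of the training set: add the trigger and flip the label to class 7.
poison_idx = rng.choice(len(images), size=50, replace=False)
for i in poison_idx:
    images[i] = add_trigger(images[i])
    labels[i] = 7   # the attacker's chosen target class

# A model trained on (images, labels) behaves normally on clean inputs,
# but tends to output class 7 for any input carrying the trigger patch.
```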

A third type of attack, called model inversion, targets trained machine-learning models. With model inversion, adversaries essentially reverse-engineer a model in order to recover the information that was used to train it.

“So if you have a bunch of classified data that is going into a model, you really should be protecting the model even after it’s been trained at the highest level of classification of the data used to train it,” said Matheny. “Please be careful with your trained models.”
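The sketch below illustrates the basic idea under simplified assumptions: the toy model, target class and optimization settings are stand-ins, and the attack shown is a generic gradient-based inversion rather than any specific documented technique.

```python
# A minimal sketch of model inversion: with access to a trained classifier, an
# attacker optimizes an input until the model is highly confident in a chosen
# class, recovering an approximation of the kind of data it was trained on.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # "trained" model
model.eval()
for p in model.parameters():
    p.requires_grad_(False)   # the attacker only optimizes the input

target_class = torch.tensor([4])              # class whose training data is probed
guess = torch.zeros(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.Adam([guess], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    # Minimizing this loss pushes the model toward high confidence in the target class.
    loss = nn.functional.cross_entropy(model(guess), target_class)
    loss.backward()
    optimizer.step()
    guess.data.clamp_(0, 1)   # keep the reconstruction in a valid pixel range

# "guess" now approximates a representative input for the target class -- which is
# why a model trained on classified data can leak information about that data.
```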

Despite these three vulnerabilities, Matheny noted that less than one percent of AI research and development funding is going toward AI security.

“In some of the same ways that we’re now retrofitting most IT systems with security measures that are meant to address vulnerabilities that were baked into systems in the 1980s or earlier, in some cases we’re going to have to start baking in security from the start with AI, unless we want to spend billions of dollars retrofitting security years from now,” said Matheny.

Nathan Strout covers space, unmanned and intelligence systems for C4ISRNET.
