In today’s digitally connected world, little is safe anymore, and that now extends even to artificial intelligence and algorithms.

There is already research into countering AI by tricking it with manipulated data inputs, according to James Harris, chief technology officer of the Defense Intelligence Agency, who spoke during a presentation at the DoDIIS Worldwide Conference in St. Louis, Missouri, on Aug. 14.

Harris elaborated on these threats to AI, telling C4ISRNET following his presentation that what he’s seen in other spaces is that noise and other inputs can be added to data in ways that change how the machine responds.
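
The article does not name the specific methods Harris had in mind, but one widely studied example of this kind of attack is the fast gradient sign method, in which a small, carefully crafted perturbation is added to an input so that a classifier changes its answer even though the input looks unchanged to a human. Below is a minimal illustrative sketch in PyTorch; the `model`, `x`, and `label` names are hypothetical placeholders for a trained classifier, an input tensor, and its true class.

```python
import torch

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Fast gradient sign method: add a small adversarial perturbation.

    Illustrative sketch only; `model`, `x`, and `label` are hypothetical
    placeholders for a trained classifier, a batch of input images, and
    their true class labels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range
```

In published experiments with techniques like this, perturbations small enough to be invisible to a person are often sufficient to flip a model’s prediction.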

When asked whether such inputs can be injected remotely to affect what the machine presents to a human analyst, Harris said he was unsure about the remote aspect, but noted that anything is possible in cyberspace.

Once a machine learns a mistake, he wondered, how does it unlearn it? And how does one make sure the machine is learning from a comprehensive data set, and that humans aren’t teaching it biased information?

From the perspective of AI and machine learning, avoiding those pitfalls comes down to how the computer is trained.

Moreover, in the government and intelligence space, the analyst must know how the machine arrived at its answer, a property often referred to as explainable AI. This is one of the big challenges Harris identified in this space, as the creators of deep neural networks often can’t say why the machine produced the answer it did.

“In our line of business, that’s unacceptable,” he said. “Analysts need to be able to explain why the machine came out with the answers if they’re going to use that answer.”

As an example, he recounted a recent demonstration that tested whether a deep neural network could distinguish between a dog and a wolf.

The machine was able to tell the difference using the provided photos, but when analysts dug into how it generated its answers, they discovered the computer had learned that wolf photos tend to contain snow. It was keying off the snow, rather than the physical differences between a dog and a wolf.
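
Findings like this are typically uncovered with interpretability tools. One of the simplest is an occlusion test: mask out regions of an image and measure how much the model’s confidence drops, which reveals where the model is actually looking. The sketch below is a generic illustration, again assuming a hypothetical PyTorch classifier; it is not the tool used in the demonstration Harris described.

```python
import torch

def occlusion_map(model, image, target_class, patch=16):
    """Slide a gray patch across the image and record how much the model's
    confidence in `target_class` drops at each position. Large drops mark
    regions the model relies on; if the background (e.g., snow) dominates,
    the model has learned a shortcut rather than the animal's features.

    `model`, `image` (a C x H x W tensor), and `target_class` are
    hypothetical placeholders.
    """
    model.eval()
    _, h, w = image.shape
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    heat = torch.zeros(h // patch, w // patch)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.clone()
            occluded[:, i:i + patch, j:j + patch] = 0.5  # neutral gray patch
            with torch.no_grad():
                p = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heat[i // patch, j // patch] = base - p  # confidence drop here
    return heat
```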

When training these machines, it is important to understand what the machine is actually doing, Harris added.

“I think we’re at a point in the evolution where the analyst can use that as a decision aid; maybe the machine looked at something from a different perspective that the analyst didn’t consider, and that should open up the aperture of the analyst. But certainly there are some other areas where we’re expecting faster processing and faster speed,” he said.

Mark Pomerleau is a reporter for C4ISRNET, covering information warfare and cyberspace.