Deepfakes are a national security issue, said Lt. Gen. Jack Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center, and the Department of Defense needs to invest heavily in technology that can counter them.

Deepfakes are videos in which one person’s face is digitally superimposed onto another’s to make it appear that they said or did things they never did. As deepfake technology becomes more sophisticated and widespread, verifying that a video is authentic and unaltered becomes increasingly difficult.

During a panel at an AI conference hosted by the Johns Hopkins Applied Physics Laboratory Aug. 29, Shanahan noted that while deepfakes were a particular concern, they were simply the latest step in the kind of disinformation efforts “to cause friction and chaos” that adversaries had tried previously.

“We saw strong indications of how this could play out in the 2016 election, and we have every expectation that — if left unchecked — it will happen to us again,” said Shanahan. “As a department, at least speaking for the Defense Department, we’re saying it’s a national security problem as well. We have to invest a lot in it. A lot of commercial companies are doing these every day. The level of sophistication seems to be exponential.”

One way the Department of Defense is trying to tackle deepfakes is through DARPA’s Media Forensics (MediFor) program.

“It’s a completely unclassified program on this very question — the question of deepfakes,” said Shanahan. “It’s coming up with ways to tag and call out [disinformation regardless of medium].”

Under MediFor, DARPA researchers are developing technologies that can automatically assess whether images or video have been altered. The goal of the program is an end-to-end media forensics platform capable of detecting manipulations and detailing how they were done.
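The article does not describe MediFor’s internal techniques, but one classic image-forensics method in this family is error level analysis (ELA), which re-compresses a JPEG and looks for regions whose compression error differs from their surroundings. The sketch below is a minimal illustration of that idea, not the actual MediFor pipeline; the file names are placeholder assumptions.

```python
# Minimal error level analysis (ELA) sketch -- one well-known image-forensics
# technique, shown for illustration only; NOT the MediFor system itself.
import io

from PIL import Image, ImageChops  # pip install Pillow


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and return the per-pixel difference.

    Regions that were pasted in or edited often re-compress differently
    from the rest of the frame, so they stand out in the difference image.
    """
    original = Image.open(path).convert("RGB")

    # Re-compress the image at a known quality level.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Bright areas in the difference indicate inconsistent error levels.
    return ImageChops.difference(original, resaved)


if __name__ == "__main__":
    # "suspect.jpg" is a placeholder file name for a frame under analysis.
    ela = error_level_analysis("suspect.jpg")
    ela.save("suspect_ela.png")  # inspect bright regions for possible tampering
```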

Shanahan said that he had met with DARPA officials to talk about the program Aug. 28 and came away pleased with their efforts.

Commercial technology can also play a role in fighting deepfakes, primarily by authenticating and verifying data.

For instance, Max Tegmark, an AI expert at MIT and president of the Future of Life Institute, noted that blockchain technology could be used for verification so that viewers could check to make sure videos had not been altered from their original format. And investor Katherine Boyle added that social media companies should be pushing to verify media on their platforms.
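To illustrate the verification idea Tegmark describes: a cryptographic hash of a video changes if even a single byte is altered, so publishing the hash at release time (on a blockchain or any other tamper-evident ledger) lets viewers later re-hash their copy and compare. Below is a minimal sketch with the ledger step abstracted away; the file names are placeholders.

```python
# Sketch of hash-based video verification. Recording the hash on a
# blockchain is assumed to happen elsewhere; only hashing and comparison
# are shown here.
import hashlib


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming to handle large videos."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read 1 MiB at a time
            digest.update(chunk)
    return digest.hexdigest()


def verify_video(path: str, published_hash: str) -> bool:
    """True if the local file matches the hash published at release time."""
    return sha256_of_file(path) == published_hash


if __name__ == "__main__":
    # Placeholder file names for illustration.
    recorded = sha256_of_file("original_release.mp4")  # done once, at publication
    print(verify_video("downloaded_copy.mp4", recorded))
```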

One part of the solution will be convincing adversaries to stop engaging in these disinformation campaigns, said Bob Work, former deputy secretary of defense and co-chair of the National Security Commission on AI.

“This is a really big problem for the United States,” said Work. “Democracies are especially vulnerable to them because we are so open. The attacks have occurred. The attacks are continuing. And the attacks will occur until we can figure out some way to either stop them altogether, which I think is going to be difficult, or come to an agreement with the other great powers that doing counter-value targeting against our population is kind of off limits.”

Nathan Strout covers space, unmanned and intelligence systems for C4ISRNET.
