WASHINGTON — That artificial intelligence has tremendous implications for our future hasn’t persuaded many members of the House of Representatives to inconvenience themselves with it in the present, if the Dec. 11 turnout at the House Armed Services Subcommittee on Emerging Threats and Capabilities is any indication. Ranking member James Langevin, D-R.I., put forth the idea that “data is the new oil,” and thus began a weird discussion of all the ways in which data is important to national security, and all the ways in which it is unlike oil.

Speaking before the subcommittee were Lisa Porter, deputy undersecretary of defense for research and engineering, and Dana Deasy, chief information officer for the Department of Defense. Central to the conversation over what, exactly, the Pentagon is doing with artificial intelligence were three big themes: making the data useful, making the data verifiable and making the data secure.

Money for something, algorithms for DoD

Key to turning AI from nebulous concept and flashy buzzword into a useful way to process information is funding. While the overall budget for AI within the Pentagon is still unclear, there are some very real numbers already attached, and Porter was happy to remind the subcommittee of them.

Over the next five years, DARPA is set to spend a total of $2 billion on AI across a variety of initiatives. These include quick-turnaround projects under the Artificial Intelligence Exploration program, which aims to have researchers exploring a concept within 90 days of an announcement and to have some answer on whether that concept is viable within 18 months of funding. This work falls broadly under DARPA’s AI Next campaign, which also pursues long-term benefits and AI concepts beyond what commercial companies are developing for their own purposes.

Between that immediate-term and long-term funding sits the work of DIUx, which looks for tools and techniques already viable in the commercial world that could be adopted, off the shelf or with modification, into the operations of the Pentagon. The Joint Artificial Intelligence Center, or JAIC, can mediate those flows, looking for projects coming online in the near term and trimming redundancy where possible.

Data, not lore

What is especially compelling about machine learning is the way it reaches useful conclusions while diverging from simple, strictly deterministic algorithms. This makes an AI algorithm something of a black box: it is fed data and produces a processed result, which is hopefully something valuable downstream.

“Another well-known limitation of current systems is that they cannot explain what they do, making them hard to trust,” said Porter. “In order to address AI’s trust issue, DARPA’s Explainable AI program aims to create machine learning techniques that produce more explainable models while maintaining a high degree of performance.”
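
DARPA has not said what those explainable models will look like, but the gap the program targets is easy to sketch. As a rough, hypothetical illustration, the Python below pairs a stock classifier with scikit-learn’s permutation importance, about the crudest possible answer to “what did the model actually rely on?”; the XAI program is pursuing far richer techniques than this.

```python
# Rough illustration of what "explainable" means in practice: a model
# paired with a report of which inputs drove its decisions. This uses
# scikit-learn's permutation importance as a stand-in; it is not
# DARPA's XAI technology, just the simplest version of the idea.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: 500 samples, 6 features, 2 classes.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# features the model leans on hurt accuracy the most when scrambled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```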

The opacity of AI carries two interrelated risks. The first is that the people who need to use the results won’t trust them even when the system is working correctly; the second is that opacity lets an adversary meddle with the processing on the inside and produce a potentially dangerous result. This matters most in missions where an AI’s interpretation can be a matter of life or death, which describes both the Pentagon’s work and some commercial applications.

“People can spoof AI systems pretty easily,” said Porter. “One notable example is in the self-driving car community, [where] a team in Berkeley put tape on stop signs and AI thought a stop sign was a 45 mph speed limit sign. There are countless examples of this now.”
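
The stop sign trick belongs to a family of attacks known as adversarial examples, in which small, carefully chosen changes to an input flip a model’s answer. The sketch below uses one textbook technique, the fast gradient sign method, against a hypothetical stand-in classifier; it illustrates how little machinery such an attack requires, and is not the Berkeley team’s actual method.

```python
# A minimal sketch of an adversarial attack: the fast gradient sign
# method (FGSM). The tiny model and random "image" are hypothetical
# stand-ins, not the stop-sign attack Porter describes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: flattened 32x32 RGB image -> 10 sign classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign"
true_label = torch.tensor([0])                        # class 0 = stop sign

# Compute the loss gradient with respect to the *input*, not the weights.
loss = loss_fn(model(image), true_label)
loss.backward()

# FGSM: nudge every pixel a small step in the direction that most
# increases the loss. epsilon controls how visible the change is.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```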

Most of the actual fighting will be done by small robots

Beyond shortfalls in funding and errors in processing, the other major threat to the United States’ ability to reliably use AI is losing its edge through the theft of data. To protect AI research from another nation-state’s thieves and saboteurs, Deasy said the JAIC is turning to a novel solution: “We’re going to apply AI to help us solve this problem.”

Deasy pointed to some ongoing national and component mission initiatives: the JAIC is working with Cyber Command to see how AI and pattern recognition can be applied to detect intrusions and anomalies in sensitive networks, and to quickly assess whether there has been a change from normal behavior.

“If you think of how hackers normally try to penetrate, they’ll go to a point of least resistance, and once they’re in they’ll go laterally, and then what you’re looking for is exfiltration,” said Deasy. “We believe AI will be a very good machine use case for how we look at data patterns and signatures across our network to help ensure we don’t have exfiltration occurring to folks like the Chinese.”
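
Deasy did not describe the tooling in any detail. As a rough sketch of the general approach he outlines, the Python below fits a stock anomaly detector on synthetic “normal” network-flow records and flags a flow that looks like bulk exfiltration; the features and the choice of model are illustrative assumptions, not the JAIC’s actual system.

```python
# A minimal sketch of pattern-based exfiltration detection: learn what
# normal network flows look like, then flag flows that deviate. The
# feature set and the IsolationForest are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: [bytes_out, duration_sec, dest_port_count]
normal = np.column_stack([
    rng.normal(5_000, 1_000, 1000),   # modest outbound volume
    rng.normal(30, 10, 1000),         # short sessions
    rng.integers(1, 5, 1000),         # few destination ports
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A flow resembling bulk exfiltration: huge outbound volume, a
# long-lived session and many destination ports.
suspect = np.array([[500_000, 3_600, 40]])
print(detector.predict(suspect))  # -1 means anomalous, 1 means normal
```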

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
