If the U.S. Army has its way, soldiers deployed on the battlefield will be shielded from cyberattacks without human involvement.
The Army’s Aberdeen Proving Ground is conducting research into how artificial intelligence can protect soldiers’ tactical networks and communications from cyberattacks, according to a Jan. 14 announcement.
Among the areas of research are ways for machine learning to automatically detect known cyber vulnerabilities, spot previously unknown malware and respond to cyberattacks.
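The announcement does not name specific techniques, but spotting previously unknown malware is often framed as an anomaly-detection problem: train on traffic assumed benign, then flag whatever deviates. A minimal sketch of that framing, assuming scikit-learn and synthetic stand-ins for network-traffic features (all names and numbers here are illustrative, not drawn from the Army's work):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative only: random vectors standing in for per-flow network
# features (packet sizes, timing, byte entropy, etc.).
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))
anomalous_flow = rng.normal(loc=5.0, scale=1.0, size=(5, 8))

# Fit an unsupervised anomaly detector on traffic assumed benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# -1 flags an outlier, i.e. a candidate "previously unknown" threat.
print(detector.predict(anomalous_flow))      # expected: mostly -1
print(detector.predict(normal_traffic[:5]))  # expected: mostly 1
```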
Once responses to the market research are submitted, the Army will use them for informational and planning purposes only.
The Army’s hunt for AI research comes as the Pentagon has grown more interested in defending against cyberattacks that themselves use machine learning, a future in which machines will fight machines in cyberspace. That concern was evident in the service’s announcement.
“The cyber technology will secure automated network decisions and defend against adaptive autonomous cyberattacks at machine speed,” the Army wrote.
The Army’s focus on AI was on display at the CyCon U.S. conference in November 2018.
The Army is interested in three primary categories of attacks on artificial intelligence, Maj. Nathan Bastian, a researcher at the Army Cyber Institute, said during the conference.
First, data poisoning is a method in which an attacker inserts malicious information into a data set. Because artificial intelligence relies on these data sets to make decisions, their manipulation blunts machine learning’s effectiveness, Bastian said.
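One common form of data poisoning is label flipping, where an attacker corrupts the labels a model trains on. A minimal sketch of the effect, assuming scikit-learn and a synthetic two-class data set standing in for a security data set (the setup is illustrative, not a description of any Army system):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class data standing in for a security data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Flip labels on a fraction of the training set, then evaluate."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # attacker flips these labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4):
    print(f"{frac:.0%} poisoned -> accuracy {accuracy_after_poisoning(frac):.3f}")
```

As the poisoned fraction grows, test accuracy falls, which is the blunting effect Bastian described.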
Second, an attack on artificial intelligence can take place by manipulating how a model classifies what it sees. For example, if a cat is incorrectly labeled as a dog, then the model’s output can no longer be trusted, Bastian said.
Third, an inference attack, or figuring out where machine learning’s decision boundaries lie, can be a weapon to defeat artificial intelligence. By discovering the limits of the machine’s algorithm, Bastian said, hackers can undermine its effectiveness.
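Bastian's description is terse; one way to read it is as black-box boundary probing, where an attacker who can only query a model's predictions searches for its decision boundary and then crafts an input just on the "benign" side. A minimal sketch under that assumption (the model, its hidden rule and the probing routine are all illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Defender's model, opaque to the attacker except through queries.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hidden rule: a simple boundary
black_box = LogisticRegression().fit(X, y)

def probe_boundary(lo, hi, fixed=0.0, steps=40):
    """Binary-search along one feature for the model's decision flip."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if black_box.predict([[mid, fixed]])[0] == 1:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# The attacker locates the boundary through queries alone, then crafts
# an input sitting just on the side the model treats as benign.
edge = probe_boundary(-3.0, 3.0)
print(f"estimated boundary near x0 = {edge:.3f}")
print("evasive input classified as:",
      black_box.predict([[edge - 0.1, 0.0]])[0])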
The Department of Defense has expanded its research into AI in recent months.
In October 2018, the service created its AI task force, which is housed at Carnegie Mellon University; its projects are initiated by Army Futures Command. The Pentagon also created its Joint AI Center in the summer of 2018.
At the CyCon conference, Brig. Gen. Matthew Easley, head of the Army’s new AI task force, said that the Pentagon needs to integrate commercial AI products.
“The commercial sector is driving current breakthroughs in applications of AI,” Easley said.
Easley laid out four principles for a successful Army AI project: clean data, a clearly articulated use case, talent and technology.
However, Easley cautioned about the boundaries of machine learning during the event. Limitations include training samples that are too small and a limited ability to run machine learning in the field. He also said that AI struggles to detect zero-day attacks, which exploit previously unknown vulnerabilities.
“AI is not all that easy,” Easley said, adding that “realizing the potential of AI will require major transformation” for the Pentagon.
Justin Lynch is the Associate Editor at Fifth Domain. He has written for the New Yorker, the Associated Press, Foreign Policy, the Atlantic, and others. Follow him on Twitter @just1nlynch.