WASHINGTON — As autonomous technology continues to evolve, the Pentagon finds itself pulled in two directions, enticed by the capabilities autonomous systems could provide while insisting the technology always remain subservient to humans and to human morals and mindsets.

That tension was on full display Aug. 25, when a new report from a key Pentagon advisory group called for accelerating autonomous systems within the US military, even as the country's second-highest-ranking uniformed officer warned that limits will be needed on how the technology is used in order to avoid the dreaded killer-robot scenario.

Speaking at the Center for Strategic and International Studies, Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, laid out his concerns with the "Terminator Conundrum," the idea that a fully autonomous system could be created with the capability to make decisions about when and where to inflict violence.


While technologists in the Pentagon believe that capability is still a decade away, Selva noted that 15 years ago he was told a digital rendering of the world would be impossible, before dryly telling the audience: "So I guess Google Earth is an impossibility."

He also threw his support behind the idea of a treaty or global convention against the creation of wholly autonomous systems that can operate without a man in the loop controlling them, saying: "I do think we need to examine the bodies of law and convention that might constrain anyone in the world from building that kind of a system. … I think we have to have something."

Selva, who over the last year has worked closely with Deputy Secretary of Defense Bob Work on issues of innovation and new technologies, also laid out something he sees as a key rule for autonomous capabilities: keeping human control over the final decision to use weaponry.

"One of the places where we spend a great deal of time is determining whether or not the tools we are developing absolve humans of the decision to inflict violence on the enemy. That is a fairly bright line that we’re not willing to cross," Selva said.

Instead, the DoD is looking at capabilities that can bolster human-led military operations. And according to a new report from the Defense Science Board (DSB), a group of experts from outside the Pentagon, the department needs to begin serious thinking about bringing autonomy into mission sets as quickly as possible.

The DSB report emphasizes the issue of "trust" with autonomous and semi-autonomous systems — from the war fighter, the commander and the civilian population.

"The decision for DoD to deploy autonomous systems must be based both on trust that they will perform effectively in their intended use and that such use will not result in high-regret, unintended consequences," the authors wrote. "Without such trust, autonomous systems will not be adopted except in extreme cases such as missions that cannot otherwise be performed.

"Further, inappropriate calibration of trust assessments — whether over-trust or under-trust — during design, development, or operations will lead to misapplication of these systems. It is therefore important for DoD to focus on critical trust issues and the assurance of appropriate levels of trust."

Among the many recommendations from the report is the creation of an executive committee "with the responsibility to oversee and ensure the development and fielding of autonomous systems," as well as tasking the various services to establish point people to resource and develop autonomous systems inside each branch of the military.

The report also recommended directing the undersecretary of defense for intelligence (USD(I)) to "raise the priority of collection and analysis of foreign autonomous systems" in order to better understand how potential adversaries are developing this technology.

In his speech, Selva acknowledged that while the Pentagon wants to stay on what it considers the right side of the man-machine control line, other nations may not feel as constrained.

"In the world of autonomy, as we look at what our competitors might do in that same space, the notion of a completely robotic system that can make a decision about whether or not to inflict harm on an adversary is here," Selva said. "It’s not terribly refined, it’s not terribly good, but it’s here.

"As we develop systems that incorporate things like artificial intelligence and autonomy, we have to be very careful that we don’t design them in a way where we create a situation where those system absolve humans of that decision," he added.

As part of its recommendations, the DSB suggested a series of demonstrations and experiments to prove out the capability of autonomous systems — and to get a sense of how they may be used against the US military in the future. (The authors hint they might have recommended funding larger programs, but did not, given the current budget situation.) These test programs would run for two or three years with relatively small amounts of money, in some cases as low as $10 million.

The areas identified for those small tests include:

-- Autonomous agents to improve cyberattack indicators and warnings.

-- Onboard autonomy for sensing.

-- Time-critical intelligence from seized media.

-- Dynamic spectrum management for protection missions.

-- Unmanned undersea vehicles (UUVs) to autonomously conduct sea mine countermeasure missions.

-- Automated cyber response.

-- Cascaded UUVs for offensive maritime mining.

-- Organic tactical unmanned aircraft to support ground forces.

-- Predictive logistics and adaptive planning.

-- Adaptive logistics for rapid deployment.

Many of the areas for study are focused more on autonomous data processing than on the dreaded seek-and-destroy technology that opponents of autonomous systems worry about. Forms of autonomous data processing are already in use in the commercial world, and could be an easy boost for a military that takes in thousands of hours of live video from around the world without enough analysts to track it.

"There is no need to solve the long-term AI [artificial intelligence] problem of general intelligence in order to build high-value applications that exploit limited-scope autonomous capabilities dedicated to specific purposes," the authors of the study noted.

The classic example of how autonomy could help with data processing is a Reaper drone monitoring a house. Rather than have an analyst watch hundreds of hours of footage, an autonomous system could quickly skim through it and alert an analyst only when something changes — a car arrives, a door opens or facial recognition software picks up a hit on a target.
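To make the idea concrete, here is a minimal sketch of that kind of change-triggered triage, using simple frame differencing with the OpenCV library in Python. The function name and thresholds are illustrative assumptions, not anything specified in the DSB report, and a fielded system would layer object detection and face recognition on top of this crude first pass.

```python
import cv2

# Illustrative thresholds (assumptions) -- in practice these would be
# tuned per sensor, altitude and scene.
PIXEL_DIFF_THRESHOLD = 25      # per-pixel intensity change treated as "motion"
CHANGED_PIXEL_FRACTION = 0.01  # fraction of the frame that must change to alert

def flag_changes(video_path):
    """Yield (frame_index, frame) only when the scene changes appreciably,
    so an analyst reviews minutes of flagged footage instead of hours."""
    cap = cv2.VideoCapture(video_path)
    prev_gray = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)  # suppress sensor noise
        if prev_gray is not None:
            diff = cv2.absdiff(prev_gray, gray)
            _, mask = cv2.threshold(diff, PIXEL_DIFF_THRESHOLD, 255,
                                    cv2.THRESH_BINARY)
            changed = cv2.countNonZero(mask) / mask.size
            if changed > CHANGED_PIXEL_FRACTION:
                yield index, frame  # e.g., a car arriving or a door opening
        prev_gray = gray
        index += 1
    cap.release()
```

Even triage this simple collapses hours of static scenes into a handful of flagged moments; the harder judgment of what a flagged change means stays with the human analyst, which is exactly the division of labor the report describes.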

In his speech, Selva himself noted the need for big data analytic tools to alleviate the data crunch on his human force, saying he would need eight million new imagery analysts working 24 hours a day to keep up with the projected growth in commercial satellite imagery over the next two decades.

"No country in the world has that capacity, so we have to find a different way," Selva noted.

Aaron Mehta was deputy editor and senior Pentagon correspondent for Defense News, covering policy, strategy and acquisition at the highest levels of the Defense Department and its international partners.
