Fourteen years ago, the future of robotic warfare ground to a halt in the desert outside Barstow, California. During DARPA’s Grand Challenge, which sought to automate the task of driving long distances across rugged terrain, a converted Humvee ended its journey with a thud just 7.5 miles from the starting line, far shy of the finish in Primm, Nevada, at the end of a 142-mile course. If there is a prologue to the modern world of ground-based autonomous vehicles, it is here, in this simple failure in the desert.

However, after 18 months of iteration, multiple teams completed a 132-mile circuit, and later an urban challenge, and from there the work on self-navigating vehicles moved from an experimental frontier of military spending to a driving concern of the tech industry. Today, tech giants such as Google and Apple have created (and in Google’s case, even spun off) self-driving car companies.

It’s a foundational story, but an incomplete one: driving through an empty desert without humans on board is the easy challenge. Making a machine that can do it under the realistic conditions of war is hard, and it’s the kind of hard that must be settled first, before any theorizing about the future of war and visions of autonomous machines that actually perform the onerous work of fighting wars.

“The successes of machine learning of the decade, which have everybody so excited about the new wave of artificial intelligence, are based on the availability of enormous amounts of good, clean, nice data,” said Alexander Kott, chief scientist of the Army Research Laboratory.

“You have 1 million pictures of cats and you have 1 million pictures of dogs and then your deep learning algorithm can go and learn how to differentiate cats from dogs. That is not going to happen on the battlefield.”
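To see why that clean-data recipe matters, it helps to look at what supervised training actually consumes. The sketch below is a deliberately minimal, hypothetical example assuming the PyTorch library, with random tensors standing in for those million labeled photos; everything about it depends on each example arriving labeled, uniform and intact, exactly the guarantee a battlefield never makes.

```python
# A minimal sketch of the supervised recipe Kott describes, assuming the
# PyTorch library. Random tensors stand in for a million labeled cat and
# dog photos; the point is the precondition, not the model. Every example
# arrives clean, labeled and uniformly sized.
import torch
import torch.nn as nn

model = nn.Sequential(                        # a deliberately tiny CNN
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                          # two classes: cat, dog
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.randn(32, 3, 64, 64)       # stand-in for curated photos
    labels = torch.randint(0, 2, (32,))       # stand-in for human labels
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Remove any of those guarantees, with mislabeled examples, missing frames or deliberate camouflage, and the recipe degrades quickly. That is the gap Kott is pointing at.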

A roadmap to success

One of Kott’s, and ARL’s, interests is in what he terms “mobile intelligent entities” moving across the battlefield. The term is platform agnostic — it can apply to quadcopters and tracked vehicles — and it can also work for more science-fictional legged and limbed machines that walk and crawl into place. To better understand this idea, think of how the machines operate: on their own, evaluating the environment around them and performing tasks in collaboration with humans, rather than being directly controlled by them.

“If you take a piece of terrain and you show it to an infantry company commander and say, ‘The guy is on that hilltop over there, you must maneuver in order to take that away,’ there are only so many ways that you can maneuver on the terrain,” said Tony Cerri, who this year retired from working on simulations and data for the Army’s Training and Doctrine Command.

“A tactical genius is probably going to see something that others won’t. However, it’s going to take them a minute or so, whereas a machine could figure it out in milliseconds.”
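What does figuring it out in milliseconds look like? At its core, it is a search problem. The toy sketch below uses an invented terrain grid and cost scheme, nothing drawn from the Army’s actual tools, to find the lowest-exposure route to a hilltop; at this scale the search finishes in well under a millisecond.

```python
# A toy illustration of enumerating maneuver options: exhaustively search
# a small terrain grid for the lowest-exposure route to an objective.
# The grid, costs and objective are invented for illustration.
import heapq

EXPOSURE = [            # 0 = covered approach, 9 = open ground
    [1, 1, 9, 9, 9],
    [1, 2, 9, 3, 1],
    [1, 9, 9, 3, 1],
    [1, 1, 1, 1, 0],    # objective at (3, 4): the hilltop
]

def best_route(start, goal):
    """Dijkstra over grid cells, minimizing cumulative exposure."""
    rows, cols = len(EXPOSURE), len(EXPOSURE[0])
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                heapq.heappush(
                    frontier,
                    (cost + EXPOSURE[nr][nc], (nr, nc), path + [(nr, nc)]),
                )
    return None

print(best_route((0, 0), (3, 4)))   # cheapest covered approach to the hilltop
```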

The first generation of mobile intelligent entities will broadly fall into two categories: those that perform intelligence, surveillance and reconnaissance missions, and those that execute resupply. These are essential tasks, but a step removed from the automated fights of droid armies that usually come to mind when people picture battlefield robotics. This is partly because it’s easier to learn to walk and crawl than it is to learn to fight. But it’s also because learning to walk and crawl are, themselves, truly difficult tasks, and ones worth mastering autonomously in their own right.

“As long as you are flying, you are dealing with a medium in which mobility is relatively easy,” Kott said.

“It is much harder when we talk about ground crawlers, especially in extremely broken terrain such as urban terrain. That remains a wide-open research area. This is not necessarily near-term. This is not a space for near-term successes.”

Long past are the days when designing for maneuver meant principally designing for formations in open swathes of desert or the wide passes between hills on European plains. The army of the future, like the army of the present, will fight in cities, and it will want machines that can operate in all three dimensions of that space without needing step-by-step human supervision.

“In some ways teleoperation is also a research problem. We know how to teleoperate vehicles in a fairly benign environment, and even that is very labor-consuming,” Kott said.

“If you go into more complicated environments, for example, tunnels and holes in the urban rubble, teleoperation becomes exceedingly difficult. The amount of information that the soldier who operates that vehicle needs to actually do teleoperation at any reasonable speed is huge.”

Far from the ease of massive, clean, online datasets, or even the relatively clean data of cars collecting information on city streets, the Army will have to collect the data it needs to teach robots to navigate each of these obstacles: tunnels, manholes, urban rubble, hills and forests, craters and anything else someone might reasonably expect on a battlefield.

“The problem of retraining and relearning in changing environments continues to be a very difficult one, which doesn’t really have a particularly good solution. So, in fact, there are no reliable solutions,” Kott said.

“A good deal of it remains a matter of research on how to build machine intelligence for this kind of environment. I will say it is unreasonable to say that we have answers to these questions.”

Autonomy here is best thought of as meeting two distinct goals: saving the labor of a human doing the task, whether physically present or remote, and enabling vehicles to work in denied or difficult environments with low bandwidth. Sending back the data needed for a human to pilot, say, a remote-control ground vehicle is tricky in permissive environments, virtually impossible in dense cities and especially unlikely if the robot is operating underground. Collecting useful information for a remote operator also means adding more sensors than the vehicle itself needs, which again increases the required bandwidth and makes communication harder.
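A back-of-envelope calculation suggests the scale of the difference. The figures below are illustrative assumptions, not measurements: a single compressed video feed for teleoperation set against the occasional terse status report an autonomous vehicle might send.

```python
# Back-of-envelope arithmetic with illustrative, assumed figures, showing
# why teleoperation is bandwidth-hungry and autonomous status is not.
teleop_video_bps = 4_000_000        # assume one compressed HD feed, ~4 Mbps
status_ping_bytes = 32              # assume a GPS fix, heading and status
pings_per_minute = 2
status_bps = status_ping_bytes * 8 * pings_per_minute / 60

print(f"teleoperation: {teleop_video_bps:,.0f} bits/s, continuously")
print(f"autonomous status: {status_bps:,.1f} bits/s")
print(f"ratio: roughly {teleop_video_bps / status_bps:,.0f}x")
```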

If, instead, the robots can operate largely on their own, whether it is a quadcopter autonomously flying around a city block or a futuristic crawling machine maneuvering through sewers to deliver supplies to troops pinned down in a difficult-to-reach place, then the machines will need to do far less to keep humans informed. Maybe it’s a low-bandwidth broadcast of a GPS location, or maybe just a one-sentence generated update relayed to a tablet indicating progress. Developing to both the limits and strengths of autonomous machines means devising new ways for humans to collaborate with the robots working alongside them.
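What might that low-bandwidth broadcast look like on the wire? The format below is purely hypothetical, a sketch of how a position, heading and progress report can be packed into roughly a dozen bytes.

```python
# A hypothetical wire format for the terse status update described above.
# The field layout is invented for illustration; the point is that
# position, heading and mission progress fit in 11 bytes.
import struct

def pack_status(lat, lon, heading_deg, percent_complete):
    """Pack a status report: scaled 32-bit ints for position,
    small unsigned ints for heading and mission progress."""
    return struct.pack(
        "<iiHB",                     # little-endian, no padding: 11 bytes
        int(lat * 1e7),              # latitude at 1e-7 degree resolution
        int(lon * 1e7),              # longitude, same scaling
        int(heading_deg) % 360,      # heading in whole degrees
        min(int(percent_complete), 100),
    )

# Hypothetical fix near Barstow, heading west, 40 percent complete.
msg = pack_status(34.8987, -116.8350, 270, 40)
print(len(msg), "bytes:", msg.hex())
```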

Team all that you can team

To make that collaboration work, ARL is studying “human agent teaming,” which borrows lessons from human-machine interface work but is much more about the relationship between entities than the interpretation of one by the other.

“It’s not just a machine; because it’s capable of some actions and decisions on its own, it becomes much more difficult for the human to deal with it,” Kott said.

“Teaming means that the agent needs to understand the human, and that the human needs to understand the agent.”

To that end, the key thus far has been transparency. If a human knows that the machine is communicating, even without understanding perfectly what the machine is doing, the fact of that communication is, itself, a way to build trust between the human and the robot. This goes both ways: for the machines to work with humans, it helps if they can understand and anticipate how the humans are going to act.

“We are also looking at how machines can learn about what humans do. We’re doing research on [electroencephalography] signals. By analyzing EEG signals, machines are getting better and better at figuring out what the human is doing, what they’re looking at, for example, or which of several possible tasks that human is about to undertake,” Kott said.

“We don’t expect that we’ll put the EEG helmet on the soldier on the battlefield, but it’s a step in the direction of artificial agents being able to understand something about humans and anticipate what humans might want to do.”
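The article does not detail ARL’s methods, but the general shape of such work is a classification problem: map short windows of EEG-derived features onto a small set of candidate tasks. The sketch below is schematic, with synthetic data and a generic classifier (linear discriminant analysis, a common baseline in EEG research) standing in for the real thing.

```python
# A schematic of the research direction Kott describes: classify short
# windows of EEG features into which task the human is attending to.
# Synthetic data and a generic classifier stand in for real signals and
# for ARL's actual methods, which the article does not detail.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_windows, n_features = 200, 16          # e.g., band power per channel
tasks = ["scanning map", "aiming", "reading display", "resting"]

# Fake feature windows, nudged per task so the classes are separable.
y = rng.integers(0, len(tasks), n_windows)
X = rng.normal(size=(n_windows, n_features)) + y[:, None] * 0.8

clf = LinearDiscriminantAnalysis().fit(X[:150], y[:150])
accuracy = clf.score(X[150:], y[150:])
print(f"held-out accuracy on synthetic data: {accuracy:.2f}")
print("predicted task:", tasks[clf.predict(X[150:151])[0]])
```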

Consider the earlier example of the resupply robot crawling through tunnels. For both the humans waiting on the supplies and the humans who sent it, the way the robot gets into place is less important than it getting there at all. If that robot is able to transmit the direction it’s traveling and simple information about what it is doing to get where it needs to be, the humans waiting for it can operate with some degree of trust, even if they’re skeptical of the route.

“All this is very helpful to the soldier to feel more confident that this agent is not a rogue agent. It’s not off doing something weird and entirely unpredictable,” Kott said.

“That turns out to be very helpful.”

Getting to that future means studying what we can now, with soldiers practicing teaming in virtual environments, and it means engineering those legged robots to overcome limitations on power, size and noise to reach a good-enough design. Autonomy is a science of iteration until it becomes a science of observation. The machines that will crawl and fly and drive supplies and scout future missions are likely to find themselves in far less permissive environments than today’s, and the way through a contested electromagnetic spectrum is autonomy.

Even if it means a few false starts in the desert to get there.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
