Human civilization is built on language, but at an even more basic level, it is built on trust. Knowing what others have promised to do, and then being able to rely on that promise, makes possible everything from raising a family to growing food to simply showing up at a job week after week. That trust is important all the time, and it is especially important in the high stakes of combat, where meaningful decisions happen in seconds and there may be time for only a quick sentence before action must be taken. With robots expected to be an increasingly common presence on the battlefield, DARPA has launched an ambitious program to answer a question: can robots quickly earn the trust of humans?
Officially titled “Competency-Aware Machine Learning,” the program’s stated aim is to “develop machine learning systems that continuously assess their own performance in time-critical, dynamic situations and communicate that information to human team-members in an easily understood format.”
Or, in plain language, DARPA wants a way for robots to figure out how they’re doing, quickly, and then let people know by talking to them. To get there requires a conflux of several technologies: sensors and status awareness, an ability to convert those readings into useful language, and a way to express that language.
Fortunately, humans are really good at understanding language, and at treating machines as living beings capable of communicating emotion. Consider, for example, the curious case of the last message received from NASA’s Opportunity rover. The message, sent on June 10, 2018, indicated that the robot was going to conserve battery power and try to ride out a sun-blocking dust storm. On February 13, 2019, NASA made a last attempt to contact Opportunity, to no avail. A journalist’s poetic interpretation of that last transmission as “My battery is low and it’s getting dark” swept across the internet as though it were the robot’s own last words. It wasn’t, but the spread of the poetic interpretation is just the latest case of people responding to robots as capable of language, and connecting with them on an emotional level.
“If the machine can say, ‘I do well in these conditions, but I don’t have a lot of experience in those conditions,’ that will allow a better human-machine teaming,” said Jiangying Zhou, a program manager in DARPA’s Defense Sciences Office in a release. “The partner then can make a more informed choice.”
Machine-to-human language will come at the later end of the program. Before that, DARPA wants the machine learning program to tackle object recognition, robotic navigation, action planning, and decision making. These skills will then be “tested using realistic test vignettes with actual applications,” to see if the learning is as adaptive as it needs to be. After passing those trials, the machine will be judged on how well it can communicate its status using “machine-derived, human-understandable, competency statements.”
Curiously, the Competency-Aware Machine Learning program will evaluate the accuracy of the machine’s ability to communicate its own competency, rather than how humans perceive that communication.
The end result may never be anything as fanciful as a worn-down space robot signaling a poetic end to its long-running mission. But it might not be that far off. A damaged battlefield robot that can organically respond to a question from a human with “SNAFU” will have mastered at least one of the arts of war.
Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.