Artificial intelligence will be a critical component of initiatives such as the so-called third offset strategy, which, at a basic level, envisions humans and machines teaming together to overcome the parity competitors have reached with American capabilities.
For humans, however, learning to trust machines is a real challenge. "A lot of AI today is a black box, you have this neural net that you put in the inputs, it spits out an answer and 90 percent of the time it's right. But that last 10 percent, sometimes it really screws up," Defense Advanced Research Projects Agency acting Director Steven Walker said Wednesday at the Defensive Cyber Operations Symposium in Baltimore, Maryland.
Walker's predecessor, Arati Prabhakar, last year at the Atlantic Council displayed a picture of a baby holding a toothbrush that a machine identified as a baseball bat. Such a mistake could have lethal implications if, for example, a drone feed misidentifies a rake as an AK-47.
"The best of these systems are statistically better than humans at identifying images. The problem is that when they’re wrong, they are wrong in ways that no human would ever be wrong and I think this is a critically important caution about where and how we would use this generation of artificial intelligence," Prabhakar said.
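The "black box" behavior Walker and Prabhakar describe can be pictured with a minimal sketch: a classifier's final layer produces raw scores, a softmax turns them into probabilities, and the system hands back only a label and a confidence. The labels and numbers below are invented for illustration, not output from any real model.

```python
import numpy as np

def softmax(logits):
    """Convert raw network outputs into a probability distribution."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Hypothetical raw scores from an image classifier's final layer
# for the classes below -- illustrative numbers, not a real model.
labels = ["baseball bat", "toothbrush", "rake", "rifle"]
logits = np.array([4.1, 1.2, 0.3, 0.1])

probs = softmax(logits)
top = int(np.argmax(probs))

# The black box hands back only a label and a confidence --
# no insight into *why* it chose that label.
print(f"{labels[top]} ({probs[top]:.0%})")
```

Nothing in that output tells an operator whether the model keyed on the object's shape, its texture, or an irrelevant artifact, which is exactly the gap Prabhakar's toothbrush example exposes.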
The goal of DARPA's Explainable AI program, Walker said, is to draw on psychologists and others to understand what would make a human comfortable working with a machine, and what a human must know about how a machine arrives at an answer, so the system presents not just an answer and a probability but the methodology laid out.
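One simple way to picture "methodology laid out" rather than "answer and probability" is a model whose per-feature contributions are inspectable. The sketch below uses a toy logistic model with invented feature names and weights; it is only an illustration of the idea, not DARPA's approach, which targets far more opaque deep networks.

```python
import numpy as np

# Toy linear classifier: the explanation is each feature's
# contribution (weight * value). Features and weights are
# invented for illustration.
features = {"elongated shape": 0.9, "wooden texture": 0.7, "bristles": 0.05}
weights  = {"elongated shape": 2.0, "wooden texture": 1.5, "bristles": -3.0}

score = sum(weights[f] * v for f, v in features.items())
prob = 1.0 / (1.0 + np.exp(-score))  # logistic score -> probability

# Instead of only an answer and a probability, surface how each
# input pushed the decision toward or away from the label.
contributions = {f: weights[f] * v for f, v in features.items()}

print(f"label: baseball bat  p={prob:.2f}")
for f, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {f:>16}: {c:+.2f}")
```

A readout like this lets an operator see that "bristles" pushed against the label, the kind of evidence a human would need to catch the toothbrush-as-bat error before acting on it.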
Walker also discussed another AI program in the works that examines how a human can come to trust a machine-learning algorithm and an AI-based machine: what verification process must take place around how a machine develops an answer before a human will trust it?
These are critical questions that need to be answered for concepts such as the third offset and man-machine teaming to be realized.