Artificially intelligent systems must be able to explain themselves to operators if they are to be trusted, according to an expert from the Defense Advanced Research Projects Agency, who voiced concern that the reasoning of current AI systems is often hidden behind opaque algorithms.

“A lot of the machine learning algorithms we’re using today, I would tell you ‘good luck,’” said Fred Kennedy, director of DARPA’s Tactical Technology Office, during a panel at the Navy League’s Sea-Air-Space exposition on April 10. “We have no idea why they know the difference between a cat and a baboon.”

“If you start diving down into the neural net that’s controlling it,” Kennedy continued, “you quickly discover that the features these algorithms are picking out have very little to do with how humans identify things.”

Kennedy’s comments were in response to Deputy Assistant Secretary of the Navy for Unmanned Systems Frank Kelley, who described the leap of faith operators must make when dealing with artificially intelligent systems.

“You’re throwing a master switch on and just praying to God that [Naval Research Laboratory] and Johns Hopkins knew what the hell that they were doing,” Kelley said of the process.

The key to building trust, according to Kennedy, lies with the machines.

“The system has to tell us what it’s thinking,” Kennedy said. “That’s where the trust gets built. That’s how we start to use and understand them.”

DARPA’s Explainable Artificial Intelligence program seeks to teach AI how to do just that. The program envisions systems that will have the ability to explain the rationale behind their decisions, characterize their strengths and weaknesses, and describe how they will behave in the future. Such capabilities are designed to improve teamwork between man and machine by encouraging warfighters to trust artificially intelligent systems.
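To make the idea concrete, the sketch below is a minimal, hypothetical illustration, not drawn from DARPA’s program, of one common way to surface what drives a model’s decisions: permutation feature importance, shown here with scikit-learn on a stock dataset. The library, dataset, and model choices are illustrative assumptions only.

```python
# Illustrative sketch (not DARPA's XAI method): probe which input features a
# trained model relies on by shuffling each feature and measuring the drop in
# accuracy. A large drop suggests the model leans heavily on that feature.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small example dataset and split it for training and evaluation.
X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an ordinary "black box" classifier.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data and record the accuracy loss.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report features from most to least influential.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

A system of the kind the program envisions would go further, producing human-readable rationales rather than raw importance scores, but the same principle of interrogating the model underlies both.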

“It’s always going to be about human-unmanned teaming,” said Kennedy. “There is no doubt about that.”
