WASHINGTON — The U.S. intelligence community released artificial intelligence principles and an ethics framework on Thursday to ensure that intelligence agencies develop AI systems safely and legally as the technology rapidly evolves.

The long-awaited principles and framework, released in two separate documents by the Office of the Director of National Intelligence, are meant to outline the intelligence community’s broad values and guidance for the ethical development of AI. The documents lay out six principles:

  • Respect the law and act with integrity.
  • Be transparent and accountable.
  • Be objective and equitable.
  • Focus on human-centered development and use.
  • Ensure AI systems are secure and resilient.
  • Inform decisions via science and technology.

The accompanying six-page framework, with 10 stated objectives, is meant to put “meat on the bones” of the stated principles, Ben Huebner, chief of ODNI’s Office of Civil Liberties, Privacy, and Transparency, said Thursday on a call with reporters. Huebner said the framework poses a series of questions that practitioners across the 17 intelligence agencies should consider when developing AI.

“The framework isn’t a checklist. It certainly doesn’t answer every question. It’s a tool, and it’s a tool that provides the intelligence community with a consistent approach” to artificial intelligence, Huebner said.

The intelligence community is a sprawling collection of agencies, each tasked with a specific intelligence mission, which makes it difficult to verify that these ethics considerations are implemented consistently.

To ease oversight challenges, a critical piece of the framework calls on AI practitioners in the intelligence community to document key information about the AI technology under development: its intended use, its design, its limitations, the related data sets and changes to its algorithm over time.
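
What such documentation might look like is easiest to see in miniature. The sketch below is purely illustrative: the framework does not prescribe a schema, and every field, name and value here is an assumption, not ODNI guidance.

```python
# Illustrative only: a hypothetical record of the documentation the framework
# describes (intended use, design, limitations, data sets, algorithm changes).
# ODNI prescribes no schema; every name and value here is an assumption.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    intended_use: str            # what the system is approved to do
    design_summary: str          # model family, architecture, training approach
    known_limitations: list[str]
    training_datasets: list[str]
    change_log: list[tuple[date, str]] = field(default_factory=list)

    def record_change(self, when: date, description: str) -> None:
        """Log an algorithm change so overseers can trace its evolution."""
        self.change_log.append((when, description))

# A hypothetical entry a legal counsel or inspector general could review.
record = AISystemRecord(
    name="broadcast-transcription",
    intended_use="Transcribe foreign-language broadcast audio for analysts",
    design_summary="Speech-to-text model fine-tuned on broadcast speech",
    known_limitations=["Accuracy degrades with overlapping speakers"],
    training_datasets=["licensed broadcast corpus"],
)
record.record_change(date(2020, 7, 23), "Retrained with additional dialect data")
```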

Asked how ODNI will verify that AI projects at intelligence agencies under its purview are following the framework and principles, Huebner pointed to the documentation guidance, which would make that material accessible to legal counsels, inspectors general, and privacy and civil liberties officers.

“One of the things I think you see throughout particularly the ethics framework is the incorporation of best practices to allow the folks [in] the oversight community ... the tools they’ll need to conduct that oversight,” Huebner said.

This release is only the first iteration of the framework. Huebner told reporters to expect further versions as the intelligence community learns more about the use cases for AI, and as the technology itself matures.

Dean Souleles, who runs ODNI’s Augmenting Intelligence through Machines Innovation Hub, told reporters that ODNI’s working groups are “actively” developing different standards for future use cases.

“It is too early to define a long list of dos and don’ts,” Souleles said. “We need to understand how this technology works. We need to spend our time under the framework and guidelines that we’re putting out to make sure we are staying within the guidelines. But this is a very fast-moving train with this technology.”

A major concern with artificial intelligence, no matter who is developing it, is bias in algorithms. The framework tells practitioners to take steps to discover undesired biases that may enter algorithms throughout the life cycle of an AI program.
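
As one hedged illustration of what discovering undesired bias could mean in practice, the sketch below computes a simple rate disparity across data-source groups. The metric, names and data are assumptions for the example; the framework names no specific test.

```python
# Illustrative only: one simple bias probe, not anything the framework mandates.
# It measures how much a model's positive-flag rate differs across groups,
# a check that could be re-run at each stage of an AI program's life cycle.
from collections import defaultdict

def flag_rates(predictions: list[int], groups: list[str]) -> dict[str, float]:
    """Fraction of positive predictions per group label."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def max_disparity(predictions: list[int], groups: list[str]) -> float:
    """Largest gap in positive-flag rate between any two groups."""
    rates = flag_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: a large gap between sources would warrant review.
preds  = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["source_a", "source_a", "source_a", "source_b",
          "source_b", "source_b", "source_b", "source_a"]
print(max_disparity(preds, groups))  # 0.25 with this toy data
```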

“The important thing for intelligence analysts is to understand the sources of the data that we have, inherent biases in those data, and then to be able to make their conclusions based on the understanding of that,” Souleles said. “And that is not substantially different from [the] core mission of intelligence. We always deal with uncertainty.”

With the intelligence community charged with providing objective intelligence to policymakers to inform decisions, its agencies must ensure that any AI systems used for intelligence collection are accurate. To trust the intelligence collected and analyzed by artificial intelligence, humans must also be able to understand how the algorithms reached their conclusions.

“No one has a stronger case than the IC that AI needs to produce results for policymakers outside the IC,” Huebner said. “If a member of the Cabinet or any senior policymaker turns to their intelligence briefer and says: ‘How do we know that?’ We never have the option of saying: ‘We don’t really know, that’s kind of what the algorithm is telling us.’ That’s inconsistent with what intelligence is.”

Humans will also remain a critical part of the intelligence collection process, the framework stated, based on “assessed risk.” In addition, Souleles noted, humans will remain a cornerstone of intelligence reporting because AI can count enemy aircraft on a runway but cannot explain why the number is higher or lower than the day before.

Huebner also said ODNI will solicit feedback from the public. The intelligence community will look for AI use cases to talk about publicly when able, he said, adding that voice-to-text technology is one current area of work.

Souleles said his office is working to apply machines to broad problems the private sector also faces, such as identifying actors who want to cause harm, countering foreign influence operations, and securing both the intelligence community and U.S. networks.

“There’s many areas that I think we’re going to be able to talk about going forward where there’s overlap that does not expose our classified sources and methods because many, many, many of these things are very common problems,” Souleles said.

Andrew Eversden covers all things defense technology for C4ISRNET. He previously reported on federal IT and cybersecurity for Federal Times and Fifth Domain, and worked as a congressional reporting fellow for the Texas Tribune. He was also a Washington intern for the Durango Herald. Andrew is a graduate of American University.
