Artificial intelligence is an inherently opaque process. Creating machines that can arrive at new conclusions means setting a process in motion more than carefully orchestrating each step along the path. So, too, it appears, are the operations of the National Security Commission on Artificial Intelligence, which held its third plenary session July 11, 2019, in Cupertino, California. In a Department of Defense news release, the Commission shared merely that it had listened to classified briefings about AI.

Created by the National Defense Authorization Act in 2018, the Commission is explicitly tasked with reviewing “advances in artificial intelligence, related machine learning developments, and associated technologies” for the express purpose of addressing “the national and economic security needs of the United States, including economic risk, and any other associated issues.” That review will take the form of annual reports to the president and Congress, made publicly available with the possible exception of a classified annex.

“The session advanced the Commission's understanding of the nature and challenges facing the United States,” said Commissioner Andy Jassy, CEO of Amazon Web Services, in the release. The session included “classified briefings on counterintelligence threats and challenges” and followed what the Commission says have been more than 100 classified and unclassified briefings delivered to the Commission and its working groups.

Besides discussing threats behind closed doors, the commissioners received possibly classified briefings on “opportunities to advance U.S. leadership in artificial intelligence,” as well as workshop reports on maintaining U.S. leadership in AI research, national security uses of AI, preparing citizens for AI, and sustaining a competitive advantage.

Absent from the release is any information about the specifics of those reports, assessments, working group evaluations or briefings. Companies or members of the public interested in how the Commission is studying AI are left only with the knowledge that the appointed commissioners met to discuss these topics and have not yet released any of their recommendations.

The Commission’s mandate specifies annual reports published no later than a year after the NDAA was passed, with one exception: the Commission’s first public report was due Feb. 9, 2019. Shortly after the Commission missed that deadline, the Electronic Privacy Information Center (EPIC) filed a Freedom of Information Act request for the report. In March and April, EPIC again noted that its request for a public copy of the Commission’s first report had not yet been honored.

In the meantime, any companies hoping to build or adapt AI to meet stated national security needs will have to find other channels to discover what the Commission thinks.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
