When it comes to deciding the future of AI, it will be a question for the 117th Congress. That is the most concrete conclusion from the July 31, 2019, publication of the initial report from the National Security Commission on Artificial Intelligence. Sent to Congress per the legislative mandate that established the commission, the report is less a comprehensive look at the state of play of AI and more an outline of what, exactly, the commission plans to write about in the future.

“We submitted the initial report today, and there’s not a lot of substance in the report,” said NSCAI Vice Chair Bob Work, a former deputy secretary of defense, in a call with media. “We are listening, in fact-finding mode, and have an unbelievably capable staff.”

The commission’s report, initially scheduled for Feb. 9, 2019, has had its legislative mandate extended through March 2021. That will put it squarely at the feet of the incoming 117th Congress, rather than the original timing of dropping right around the November 2020 election. That shift in timing, thanks to language in the fiscal year 2020 NDAA, matches the actual pace at which the commission assembled (it did not hold its first plenary session until March 2019), and gives the findings a greater window in which to have an impact with whatever legislative and executive makeup is in office when they arrive.

The initial report’s focus on process over findings is a way to give the commission breathing room to thoughtfully consider the issues.

“One of the reasons we elected not to give any early recommendations in July of 2019 is we don’t want to have to change them based on the fast pace of technology,” said Work. “This is going to be a challenge for us, because it is moving very very fast.”

There are two main ways the commission is staying abreast of current technology. The first is by dividing the work into four working groups, each with a narrower area of focus.

The first working group, led by Andrew Moore of Google (formerly Carnegie Mellon), is focused on “Maintaining U.S. Global Leadership in AI Research.” The second, led by Oracle CEO Safra Catz, is focused on maintaining that leadership but in AI national security applications. The third, led by José-Marie Griffiths of Dakota State University, is focused broadly on preparing citizens for an AI future and more narrowly on the needs of a modern STEM-dependent workforce for government and the Department of Defense. The fourth working group, led by Jason Matheny, formerly of IARPA, is tasked with “international competitiveness and cooperation in AI.”

Besides the groups themselves, the composition of the commission leans heavily on people in the same sectors of the technology industry that are intimately intertwined with what defense might need to procure.

Asked if this might present a conflict of interest, with, say, people who have a financial stake in government contracts making recommendations to Congress, Work said that the process so far has been self-policing. The methods used include commissioners self-identifying their respective company interests when relevant, and also making recommendations by consensus.

“Eric [Schmidt, commission chairman and former Google CEO] and I are trying very hard to have all the commission recommendations be a consensus recommendation, preferably unanimous,” said Work, “that provides a measure so one person isn’t able to tilt the playing field.”

The commission is also careful to not replicate the work being done by other future-looking AI avenues, like the Defense Innovation Board, also chaired by Schmidt, and wants to make sure that the commission’s reports are a value-add. To that end, the commission is primarily focused on its interim report, scheduled for November 2019, and its final report in March 2021.

It’s possible that Congress may not want to wait that long to tap into the expertise and perspective assembled by the commission. Work said that, if asked for input by Congress, the commission would respond, though was careful to note that the legislative mandate for the commission itself expires in March 2021. Should Congress want to tap into the same well of knowledge, it would need to act specifically to extend the commission’s mandate.

In the meantime, the commission plans to continue its work in both classified and unclassified briefings. November will likely be the first time the public gets a glimpse into what the commission is exploring with regards to AI, and not just how it is being explored.

“These guys are all at the cutting edge of technology and we think we can manage it, although it will be a challenge,” said Work, with a chuckle.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
