Is $10 million and 22 months enough to shape the future of artificial intelligence?

Probably not, but inside the fiscal 2019 national defense policy bill is a modest sum set aside for the creation and operations of a new National Security Commission on Artificial Intelligence. And in a small way, that group will try. The commission’s full membership, announced Jan. 18, includes 15 people across the technology and defense sectors. Led by Eric Schmidt, formerly of Google and now a technical adviser to Google parent company Alphabet, the commission is co-chaired by Robert Work, a former deputy secretary of defense who is presently at the Center for a New American Security.

The commission is established as an independent body within the executive branch, and its scope is broad.

The commission is to examine the competitiveness of the United States in artificial intelligence, how the U.S. can maintain a technological advantage in AI, and foreign developments and investments in AI, especially as related to national security. In addition, the authorization tasks the commission with considering means to stimulate investment in AI research and in AI workforce development. The commission is also expected to consider the risks of military uses of AI by the United States or others, and the ethics of AI and machine learning as applied to defense. Finally, it is to look at how to establish data standards across the national security space, and how the evolving technology can be managed.

All of this has been discussed in some form in the national security community for months, if not years, but now a formal commission will help lay out a blueprint.

Those are several tall orders, and they will produce at least three reports. The first report is required by law to be delivered no later than February 2019, with annual reports to follow in August 2019 and August 2020. The commission is set to wrap up its work by October 2020.

Inside the authorization is a definition of artificial intelligence for the commission to work from. Or, well, five definitions:

  • Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
  • An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
  • An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
  • A set of techniques, including machine learning, that is designed to approximate a cognitive task.
  • An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.

Who will be the people tasked with navigating AI and the national security space? Mostly the people already developing and buying the technologies that make up the modern AI sector.

Besides Schmidt, the list includes several prominent players from the software and AI industries, including Oracle co-CEO Safra Catz, Director of Microsoft Research Eric Horvitz, CEO of Amazon Web Services Andy Jassy, and Head of Google Cloud AI Andrew Moore. After the 2018 internal protests at Microsoft, Amazon, and, especially, Google over the tech sector’s involvement in Pentagon contracts, one might expect to see some skepticism of AI use in national security from Silicon Valley leadership. Instead, Google, which responded to employee pressure by declining to renew its Project Maven contract, is represented twice: officially by Moore and functionally by Schmidt.

Academia is also present on the commission, with a seat held by Dakota State University president Jose-Marie Griffiths. CEO Ken Ford will represent the Florida Institute for Human & Machine Cognition, which is tied to Florida’s State University System. Caltech and NASA will be represented on the commission by Steve Chien, supervisor of the Jet Propulsion Laboratory’s AI group.

The intelligence sector will be present at the table in the form of In-Q-Tel CEO Chris Darby and Jason Matheny, former director of the Intelligence Advanced Research Projects Activity.

Rounding out the commission are William Mark, director of the information and computing sciences division at SRI International, and a pair of consultants: Katrina McFarland of Cypress International and Gilman Louie of Alsop Louie Partners. Finally, civil society groups are represented by Open Society Foundations fellow Mignon Clyburn.

Balancing the security risks, military potential, ethical considerations, and workforce demands of the new and growing sector of machine cognition is a daunting task. Finding a way to bend the federal government to its conclusions will be tricky in any political climate, though perhaps especially so in the present moment, when workers in the technological sector are vocal about fears of the abuse of AI and the government struggles to clearly articulate technology strategies.

The composition of the commission suggests that whatever conclusions are reached by the commission will be agreeable to the existing technology sector, amenable to the intelligence services, and at least workable by academia. Still, the proof is in the doing, and anyone interested in how the AI sector thinks the federal government should think about AI for national security should look forward to the commission’s initial report.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
