The earliest weapons were dual-use technologies: rocks chipped into sharp edges and bound to arrows, spears or clubs that proved as useful for hunting as they did for fighting.
Modern life is millennia removed from proto-ethical debates over the dangers of collaborating on hunting technology with people who may someday turn it to violence, but dual-use tools are at the center of a major inter- and intra-national debate. The U.S.-China Economic and Security Review Commission held a hearing June 7 about China and technology, specifically the ways in which developments in the civilian sector could be exploited and weaponized by China’s military.
“China has been hyped as an AI superpower poised to overtake the U.S. in the strategic technology domain of AI,” said Jeffrey Ding, China lead at the Center for the Governance of AI at the University of Oxford’s Future of Humanity Institute and a D.Phil. candidate at Oxford.
“Much of the research supporting this claim suffers from the ‘AI abstraction problem,’” Ding continued, in which “the concept of AI, which encompasses anything from fuzzy mathematics to drone swarms, becomes so slippery that it is no longer analytically coherent or useful.”
Within the fuzziness of that debate around AI, the commissioners probed and the witnesses testified, occasionally going into the weeds of specific data but largely returning to questions of intent and capability. The big questions underlying the hearing were these: Can China leapfrog U.S. development of AI to create useful military tools? Can the United States marshal an industrial policy to maintain a lead in AI without the kind of integrated state apparatus that China has? And what will China do with AI once it develops it?
The degree to which China can expand on and leapfrog U.S. AI development hinges on everything from the respective countries’ immigration policies to the willingness of multinational corporations to share research, direct and passive funding, and the extent to which counter-intelligence operations can yield results or chill collaboration. It also depends on what purpose, exactly, the AI is being trained to serve.
“AI appeals to the [People’s Liberation Army] in part because it has fewer legacy systems,” said Elsa Kania, adjunct senior fellow in the Technology and National Security Program at the Center for a New American Security and a research fellow at CSET. Building systems for the first time with AI in mind is a different and distinct proposition from adapting existing systems to draw on AI tools.
Looking for a hard metric in the recent past, Ding cited an unpublished manuscript by Jon Schmid, a doctoral candidate at Georgia Tech, that looked at military patents to see what was being developed.
“It’s not a perfect indicator because a lot of military hardware isn’t going to be patented open source,” said Ding, “but a crucial caveat I’ll add to that is a lot of these advanced military systems source components that are found in patents, so it’s a good indicator.”
Looking specifically at the abstracts of patents for military hardware or hardware components that cited “autonomous” or “unmanned,” the research found that from 2003 to 2015 the United States held a lead in cumulative military patents seven times that of China.
“We can think of AI as the next wave of software improvements,” suggested Helen Toner, director of strategy at Georgetown University’s Center for Security and Emerging Technology. Without a clear inflection point at which AI transitions from capable software to a unique capability, many of the changes will resemble software upgrades in both nature and implementation.
AI can be trickier to quantify because, while end products may note features enabled by AI, such as autonomy, AI itself is a process applied to data, and it doesn’t always transfer easily, or at all, to other applications. Those processes are also only as good as the quality of the data that goes into them.
“It’s not quite as simple as ‘data is the new oil,’ but the value of data is really application dependent, so we have seen strength in particular applications where massive data is an advantage,” said Kania. “The nature of AI is a general-purpose technology that is enhancing and augmenting weapon systems across a vast array of weapon systems,” driving dynamics that are hard to evaluate and that complicate safety and stability.
Ultimately, what matters most in AI is less any specific development and more how that AI facilitates and enables choices people were already prepared to make. Multiple members of the U.S.-China Economic and Security Review Commission asked specifically about the role of AI in China’s repressive policy toward the Uighur people of Xinjiang. That specific concern was cited by Sens. Martin Heinrich, D-N.M., and Rob Portman, R-Ohio, in introducing a bill to enhance U.S. funding of AI research.
“The reason Xinjiang is happening is not AI, it is that the government of China is willing to make it happen,” testified Toner, “and therefore our response, if we respond, should condemn or otherwise sanction that action, of which tech is only a small part.”
The extent to which tech is especially responsible for state-led repression was contested by the panel, with Kania testifying that surveillance technologies are at the heart of the repression in Xinjiang.
It is hard for the technology sector, on its own or through U.S. government encouragement, to change the behavior of a foreign government. There was somewhat more consensus on policy suggestions for companies and the government to end relationships that build AI tools used for repression in the name of internal security.
Military implications of AI competition are likely downstream of these tangible concerns over AI built for other purposes. An ecosystem built on top of open-source tools and massive collections of data could be rebuilt if major participants stopped collaborating with each other, but it would likely take a national initiative to coordinate such a break. In the meantime, an active Cold War in AI remains a near-future possibility rather than an immediate course of action.