Dignitaries from across the world are gathering in Geneva, Switzerland, from Aug. 27 through Aug. 31 to debate a profoundly modern question: How, in an age of autonomous machines, should nations make sure that the decision to kill in battle remains in human hands?

The debates are ongoing, and whatever outcome is reached will likely reflect a mixed consensus of humanitarian interest, legal compromise and state power. But how do the American people feel about artificial intelligence-driven weapons?

That’s a question answered, in part, by a new survey from the Brookings Institution. Run through Google Surveys, the study collected responses from 2,000 adult internet users between Aug. 19 and Aug. 21, 2018. Brookings says the results were weighted by gender, age and region to match the demographics of the national internet population as estimated by the Census Bureau’s Current Population Survey.

The sample diverged somewhat from the national population, skewing more male and older than the public at large, though it roughly matched the regional distribution. That aside, the findings offer some initial insight into how people really feel about artificial intelligence used for war.
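For readers curious what that weighting looks like in practice, below is a minimal sketch of one common approach, cell-based post-stratification, in Python. The column names, demographic cells and population shares are illustrative assumptions, not figures from the Brookings study, which discloses only that it weighted by gender, age and region against Current Population Survey estimates.

```python
import pandas as pd

# Hypothetical respondent-level data: one row per survey answer.
# Column names and categories are illustrative, not from the Brookings study.
respondents = pd.DataFrame({
    "gender": ["male", "male", "female", "female", "male"],
    "age_band": ["18-34", "35-54", "18-34", "55+", "55+"],
    "region": ["South", "West", "Northeast", "Midwest", "South"],
    "answer": ["yes", "no", "unsure", "no", "yes"],
})

# Assumed population shares for each weighting cell (gender x age x region),
# the kind of figure one might derive from Census internet-user estimates.
population_share = {
    ("male", "18-34", "South"): 0.04,
    ("male", "35-54", "West"): 0.05,
    ("female", "18-34", "Northeast"): 0.03,
    ("female", "55+", "Midwest"): 0.04,
    ("male", "55+", "South"): 0.05,
    # remaining cells omitted for brevity
}

# Cell weighting: weight = population share of the cell / sample share of the cell.
cells = respondents.groupby(["gender", "age_band", "region"]).size()
sample_share = cells / len(respondents)
weights = {cell: population_share.get(cell, 0) / share
           for cell, share in sample_share.items()}
respondents["weight"] = respondents.apply(
    lambda r: weights[(r["gender"], r["age_band"], r["region"])], axis=1
)

# Weighted answer shares, analogous to the headline percentages in the survey.
weighted_shares = (
    respondents.groupby("answer")["weight"].sum() / respondents["weight"].sum()
)
print(weighted_shares)
```

Cell weighting simply counts each respondent more or less heavily so that the weighted sample matches the target demographics; larger surveys often use iterative raking instead, but the idea is the same.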

Americans, asked the question in isolation, are mostly ambivalent about developing AI technologies for warfare, with 27 percent firmly opposed, 18 percent strongly in favor, and 23 percent somewhere in between. (Grouping “possibly no” with “definitely no,” and “possibly yes” with “definitely yes,” gives 38 percent opposed and 30 percent in favor.) That leaves 32 percent unsure, reflecting the status of AI in war as a mostly open question that people answer for themselves only once they learn about it.

That’s partly what happened with Google in spring 2018, as employees within the company learned about and then objected to Project Maven, which trained AI to identify objects in drone footage. The AI built for Project Maven is clearly intended for asymmetric war, where the United States already enjoys a conventional military advantage over its insurgent foes.

What if the question, instead, was about developing AI for warfare in light of near-peer competitors working on the same? Asked “Should AI technologies be developed for warfare if we know adversaries already are developing them?” the balance shifted: 25 percent of respondents were firmly or possibly opposed, 45 percent possibly or firmly in favor, and 30 percent still undecided.

For companies interested in developing and selling AI-enabled weapons to military customers, framing a weapon as just one of many under development by many nations could mitigate some qualms, but there’s a limit to that framing. A majority of respondents said it was important for human values to guide AI development, that companies should hire ethicists for advice when developing major software, and that companies should adopt a code of AI ethics to guide their actions.

These were hardly the only controlling mechanisms respondents supported. A majority also favored corporate AI ethics review boards, an audit trail so that AI code can be traced back to its human coders, ethics training for the people building the AI, and a “means for mediation when AI solutions inflict harm or damages on people.”

That means any business working at the intersection of AI and weapons, or AI and sensors, or AI and defense writ large, should take public skepticism of the technology into account when designing and developing its machines. While 50 percent of the public had no clear preference among legislators, private business leaders, judges, coders and the general public for who should design and deploy AI, the answers on ethics obligations suggest strong public concern about making sure companies get this right.

This is, after all, ultimately about entrusting machines with battlefield decisions, from target identification possibly all the way through making the call to shoot and kill. No one wants the machines to get it wrong, and people are skeptical enough of machines getting it right that there’s a good chance the international community decides machines don’t get to make that call at all.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
