When it comes to plotting the future of artificial intelligence, the military has a metaphor problem. Not a metaphorical problem, but a literal one based on the language used to describe both the style and the structure of the AI threat from abroad.

The problem, narrowly put, is an over-reliance on the board game “Go” as a metaphor for China’s approach to AI.

The board game Go, first played in ancient China at least 2,500 years ago, is about positioning identical pieces on a vast board, with the aim of supporting friendly stones and capturing rival ones. Just as chess can be seen as a distillation of combined arms on the battlefield, Go’s strength is in how it simulates a longer campaign over territory.

Like chess, Go has also become a popular way to demonstrate the strength of AI.

The AlphaGo project, built by Google’s DeepMind lab, became the first program to beat a professional human player without a handicap in 2015, and it beat a world-class champion 4-1 in a five-game match in 2016. That AlphaGo took longer to create than the chess-playing Deep Blue speaks mostly to the complexity of possible board states in the respective games; that Go has 361 intersections while a chessboard has 64 squares is no small factor in this.
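To give a rough sense of that gap, here is a minimal back-of-the-envelope sketch (not from the original article). The Go figure is a simple upper bound that treats each of the 361 points as empty, black or white; the chess number is only a commonly cited order-of-magnitude estimate of its state space, used here for scale.

```python
import math

# Naive upper bound on Go positions: every one of the 361 points is
# either empty, black, or white (ignores legality, so it overcounts).
go_upper_bound = 3 ** 361

# Rough, commonly cited order-of-magnitude estimate for the number of
# reachable chess positions; estimates in the literature vary.
chess_estimate = 10 ** 47

print(f"Go naive upper bound: ~10^{math.log10(go_upper_bound):.0f}")
print(f"Chess rough estimate: ~10^{math.log10(chess_estimate):.0f}")
```

Even this crude comparison, roughly 10^172 against roughly 10^47, suggests why Go resisted brute-force search long after chess fell.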

For AI researchers, building machines to master games with fixed pieces in a fixed space is a useful way to demonstrate learning in constrained problem sets. But there is little in the overall study of the games that informs strategic thinking at anything more than a rudimentary level, and that’s where the metaphor could lead to bad policy.

At the 2019 Association of the United States Army symposium on AI and Autonomy in Detroit, multiple speakers on Nov. 20 referenced Go as a way to understand the actions of China, especially in light of strategic competition on the international stage. Acting Under Secretary of the Army James McPherson discussed Go as an insight into China’s strategic thinking in his keynote, and that sentiment was echoed later by Brandon Tseng, Shield AI’s chief operating officer.

“The Chinese are playing Go, which is about surrounding, taking more territory and surrounding your adversary,” said Tseng, speaking on a panel about AI and autonomous capabilities in support of competition.

Tseng went on to describe AI as an answer to the problem of operating remotely piloted vehicles in denied environments, where control links cannot be counted on. Finding a way for robots to move through electromagnetically denied environments is an undeniable part of the drive behind modern military AI and autonomy.

But we don’t need a Go board to explain that, or to cling to the misunderstood strategic thinking of the past. Thinking that Go itself will unlock China’s strategy is a line pushed by figures ranging from former House Speaker Newt Gingrich to former Secretary of State Henry Kissinger. The notion that the United States is playing chess (or, less charitably, checkers) while its rivals play Go has been repeated by think tanks, but it’s hardly a new idea. The claim that Go itself informed the strategy of rivals to U.S. power was the subject of a book published in 1969, written as an attempt to understand why American forces were unable to secure victory in Vietnam.

In the decades since Vietnam, humans and algorithms alike have gotten better at playing Go, but that narrow AI application has not translated into strategic insight. Nor should it. What is compelling about AI for maneuver is not an unrelated algorithm in an unrelated field tackling a new game. What is compelling is the way AI can open up opportunities for commanders on the battlefield, and for that, there’s a whole host of games to study instead.

If the Army or industry wanted to, it could look instead at the limited insights from how AI is tackling StarCraft. But when it makes that leap, it should see it as a narrow artificial intelligence processing a game, not a game offering a window into a whole strategic outlook.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
