China has a national plan for it. Russia says it will determine the “ruler of the world.” The United States is investing heavily to develop it.

The race is on to create, control and weaponize artificial intelligence.

In “T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power,” set for release Aug. 25, Michael Kanaan, one of the Air Force’s AI leaders, lays out the realities of AI from a human-oriented perspective. A technology often shrouded in mystery and misunderstanding becomes easy to comprehend through a discussion of the global implications of developing AI.

The following excerpt, edited for length and clarity, introduces how, in late 2017, the conversation about artificial intelligence changed forever.

It was a Friday morning, Sept. 1, 2017, and not yet dawn when I stepped out of Reagan National Airport and followed my bag into the back of a waiting SUV. After flying east all night from San Francisco to D.C., I still had two hours before a Pentagon briefing with Lt. Gen. VeraLinn “Dash” Jamieson. She was the deputy chief of staff for U.S. Air Force intelligence and the country’s most senior Air Force intelligence officer, a three-star officer responsible for a staff of 30,000 and an overall budget of $55 billion.

As the Air Force lead officer for artificial intelligence and machine learning, I’d been reporting directly to Jamieson for over two years. The briefing that morning was to discuss the commitments we’d just received from two of Silicon Valley’s most prominent AI companies. After months of collective effort, the new agreements were significant steps forward. They were also crucial proof that the long history of cooperation between the American public and private sectors could reasonably be expected to continue. With the world marching steadfastly into the promising but unsettled fields of AI, it was becoming critical that Americans do so, if not entirely in harmony, then at least to the sounds of the same beat.

My apartment was only a short ride away. I was looking forward to a hot shower and strong coffee. But as the SUV pulled out of the terminal and into the morning darkness, a message alert pinged from my phone. It was a text from the general. Short and to the point, as usual. “See Putin comments re AI.”

A quick web search pulled up a quote already posting to news feeds everywhere. At a televised symposium broadcast throughout Russia only an hour earlier, President Vladimir Putin had crafted a sound bite making headlines around the globe. His unambiguous three sentences translated to: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

As the driver accelerated up the Interstate 395 ramp toward the city, a heavy rain started to fall, hitting hard against the car’s metal surfaces. Far off, through the window on my right, the dome of the Capitol building glistened in white light beyond the blurred, dark space of the Potomac River. Playing at background volume over the front speakers, a National Public Radio newscaster was describing a 3-mile-wide asteroid named Florence. Streaking past our planet that morning, the massive rock would be little more than 4 million miles away at its closest point — tremendously far by human standards, but breathtakingly near by the infinite scales of space. It was the largest object NASA had ever tracked to pass so closely by our planet. On only a slightly different trajectory, it would have altered Earth’s entire landscape. And, like for the dinosaurs before us, it would have changed everything. It would have changed life. “A perfect metaphor,” I thought, “impeccably timed to coincide with Putin’s comments about AI.”

I looked back at his words. The message they carried rang like an alarm I didn’t need to hear, but the motivation behind them wasn’t so clear. Former KGB officers speak carefully and only for calculated reasons. Putin is no exception. His words matter, always. And so does his purpose. But what was it here? Just to offer a commentary or forecast? No. Not his style. A call to action, then, to energize his own population? Perhaps. But, more than that, this was a statement to other statesmen, a confirmation that he and his government were awake and aware that a deep, sophisticated effort was underway to accomplish a new world order.

Only a month earlier, China had released a massive three-part strategy aimed at achieving clear benchmarks in AI. First, by 2020, China planned to match the highest levels of AI technology and application capabilities in the U.S. or anywhere else in the world. Second, by 2025, it planned to capture a verifiable lead over all countries in the development and production of core AI technologies, including voice- and visual-recognition systems. Last, by 2030, China intended to lead all countries in every aspect and related field of AI. To be the sole leader, the world’s unquestioned and controlling epicenter of AI. Period. That is China’s declared national plan.

With the Chinese government’s newly published AI agenda available for the world to see, Putin’s words resolved any ambiguity about its implication. True to his style, his message was clear and concise. “Whoever becomes the leader … will become the ruler of the world.”

“Straightforward,” I thought. “And he’s right.” But focused administrations around the globe already know the profound potential of AI. The Chinese clearly do — it’s driving their domestic and foreign agendas. And the Saudis, the European Union nations, the U.K., and the Canadians — they know it, too. And private enterprise is certainly focused on it, from Google, Facebook, Amazon, Apple and Microsoft to their Chinese state-controlled counterparts: Baidu, Alibaba, Tencent and the telecom giant Huawei.

AI technologies have been methodically evolving since the 1960s, but over most of those years, the advances were sporadic and relatively slow. From the earliest days, private funding and government support for AI research ebbed and flowed in direct relation to the successes and failures of the latest predictions and promises. At the lowest points of progress, when little was being accomplished, investment capital dried up. And when it did, efforts slowed. It was the usual interdependent circle of cause and effect. Twice, during the late ’70s and then again during the late ’80s and early ’90s, the pace of progress all but stopped. Those years became known as the AI winters.

But, in the last 10 to 15 years, a number of major breakthroughs, in machine learning in particular, again propelled AI out of the dark and into another invigorated stage. A new momentum emerged, and an unmistakable race started to take shape. Insightful governments and industry leaders began doing everything possible to stay within reach of the lead, positioning themselves for any possible path to the front.

Now, for all to hear, Putin had just declared everything at stake. Without any room for misunderstanding, he equated AI superiority to global supremacy, to a strength akin to economic or even nuclear domination. He said it for public consumption, but it was rife with political purpose. “Whoever becomes the leader in this sphere will become the ruler of the world.”

Those words would undoubtedly add another level of urgency to the day’s meetings. That was certain. I redirected the driver to the Pentagon and looked down at my phone to answer the general’s text. “Landed. Saw quote. On my way in.”

The shower would have to wait.

***

In the months that followed, Putin’s now infamous few sentences proved impactful across continents, industries and governments. His comments provided the final push that sharpened the world’s sense of seriousness about AI and shifted nearly everyone into a higher gear. Public and private enterprises around the globe reassessed their focus and levels of commitment. Governments and industries that had previously dedicated only minimal percentages of their research and defense budgets to the new technology suddenly saw things differently. It quickly became unacceptable to slow-walk AI efforts and protocols, and no longer defensible to incubate AI innovations for longer than the shortest time necessary.

Now, not long after, the pace of the race has quickened to a full sprint. National strategies and demonstrable use have become the measurements that matter. Rollouts have become requisite. To accomplish them, agendas are more focused, aggressive and well funded. Sooner than many expected, AI is proving itself a dominant force of economic, political and cultural influence, and is poised to transform much of what we know and much of what we do. China, Russia and others are utilizing AI in ways the world needs to recognize. That’s not to say all efforts and iterations in the West are above criticism. They’re not. But if this new technology causes or contributes to a shift in power from the West to the East, everyone will be affected. Everything will change.

The future is here, and the world ahead looks far different than ever before.

No longer just science fiction or fantastic speculation, artificial intelligence is real. It’s here, all around us, and it has already become an integral and influential part of our lives. Although we’ve taken only our first few steps into this new frontier of technological innovation, AI is providing us powerful new methods of conducting our affairs and accomplishing our goals. We use these new tools every day, usually without choice and often without even realizing it — from applications that streamline our personal lives and social activities to business programs and practices that enable new ways of acquiring a competitive advantage. I’ve learned a lot about the common misperceptions and misgivings people have when trying to understand AI. Most conversations about artificial intelligence either begin or end with one or more of the following questions:

  1. What exactly is AI?
  2. What aspects of our lives will be changed by it?
  3. Which of those changes will be beneficial and which of them harmful?
  4. Where do the nations of the world stand in relation to one another, especially China and Russia?
  5. What can we do to ensure that AI is only used in legal, moral and ethical ways?

Although the answers to those questions merit long discussions and are open to differing opinions, they should at least be manageable and factually accurate. The topics shouldn’t be too difficult to discuss or debate — not conversationally or even at policymaking or political levels. Unfortunately, they generally are.

But the conversational disconnects that usually occur aren’t because of some complex technical details or confusing computer issues. Instead, it’s usually, simply, because of the same old obstacles that too often stand in the way of many other conversations. Regardless of the topic, and even when it matters most, we too frequently speak below, above, around or past one another — especially when we don’t have an equal amount of information, a shared base of knowledge or a common set of experiences. In those instances, we make too many assumptions, allow too many things to go without saying and use too many words that hold different meanings for different people. In short, too many confusions are never clarified and too many more are created. As a consequence, we’re doomed to frustration and failure from the start, inevitably unable to understand one another and incapable of appreciating each other’s perspectives and talking points. My goal throughout this book is to avoid those pitfalls.

The best way to start is to first address the most common misperceptions of all, the ones we tend to bring with us into the AI conversation. The first of these is the assumption that AI is unavoidably destined, sooner or later, to develop its own consciousness and its own autonomous, evil intent. For that idea, we can thank science fiction and the entertainment industry. Make no mistake, I’m an ardent fan of science fiction, both on screen and in books. Without any doubt, the sci-fi genre has given us fine works of imagination, insight and art. Many great fiction writers and filmmakers are extremely knowledgeable about technology and conscientiously concerned about our future. Time and again they’ve proven themselves true visionaries, and we’re unquestionably better off for their work. They spark our curiosity, ignite our imaginations, increase our appetite for knowledge, and encourage our interests in science and societal issues.

But when it comes to their scientific portrayals of artificial intelligence, our most popular authors and screenwriters have too often generated an array of exotic fears by focusing our attention on distant, dystopian possibilities instead of present-day realities. Science fiction that depicts AI usually equates a computer’s intelligence with consciousness, and then frightens us by portraying future worlds in which AI isn’t only conscious, but also evil-minded and intent, even self-motivated, on overtaking and destroying us. To create drama, there has to be conflict, and the humans in these stories are almost always overwhelmed and outmatched, naturally unable to compete against the machines’ vastly superior intelligence and mechanical strength. Iconic movies like “2001: A Space Odyssey,” “The Matrix,” “The Terminator,” “Ex Machina,” and “I, Robot,” along with television series such as “Westworld” and “Black Mirror,” have turned our underlying fears and suspicions into deep-seated and bleak expectations.

Even today, commercial companies that offer AI products and consumer services routinely have to fight our distrust of intelligent machines as a basic, necessary part of their regular marketing efforts. Just think of all the television commercials for AI-enabled products we now see, and consider how many of them are focused first on trying to put us at ease by casting a polite and gentle glow to the figurative, artificial face of their AI, even when that face has absolutely nothing to do with the services their products actually provide.

AI is an extremely powerful tool, and it has immense implications we must consider and evaluate carefully. It’s a very sharp instrument that shouldn’t be callously wielded or casually accepted, especially when it’s in the wrong hands or when it’s used for intentionally intrusive or oppressive purposes. These are serious issues, and there are significant steps we must take to ensure AI is properly designed and implemented. Fortunately, and contrary to what many people think, it’s not necessary to have a background in computer science, mathematics or engineering to meaningfully understand AI and its technological implications. With just a basic comprehension of a few fundamental concepts behind today’s computers and related sciences, it’s entirely possible to connect the relevant dots and understand the overall picture.

Creating tools to facilitate our lives is the strength of humankind. It’s what we do. Given enough time, it was probable, perhaps even inevitable, that we would create the ultimate tool — artificial intelligence itself. But what exactly does it mean that we’ve accomplished that task? And how is AI even possible? In large part, the answers lie in the history of ourselves and of our own biological intelligence. It turns out that artificially replicating what we know about the human thought process, at least as best we can, is a highly effective blueprint for creating something similar in a machine. It’s our own evolution and our own history that teach us the fundamentals that make it all possible.
