Artificial intelligence is rapidly transforming all sectors of our society. Whether we realize it or not, every time we do a Google search or ask Siri a question, we’re using AI.

For better or worse, the same is true of the very character of warfare. This is why the Department of Defense – like its counterparts in China and Russia – is investing billions of dollars to develop and integrate AI into defense systems. It’s also why DoD is now embracing initiatives that envision future technologies, including the next phase of AI – artificial general intelligence.

AGI is the ability of an intelligent agent to understand or learn any intellectual task in the same way humans do. Unlike today’s AI, which relies on ever-expanding datasets to perform more complex tasks, AGI will exhibit the attributes associated with the human brain, including common sense, background knowledge, transfer learning, abstraction, and causality. Of particular interest is the human ability to generalize from scant or incomplete input.
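
To make that last point concrete, here is a minimal sketch (purely illustrative, and not drawn from any actual AGI system) of generalizing from scant input: given only three examples of each of two unfamiliar concepts, a simple prototype learner classifies hundreds of fresh examples reliably, where a data-hungry system would typically need far more.

import numpy as np

rng = np.random.default_rng(0)

# Two unfamiliar "concepts," each observed only three times (scant input).
concept_a = rng.normal(loc=0.0, scale=1.0, size=(3, 16))
concept_b = rng.normal(loc=4.0, scale=1.0, size=(3, 16))

# Form one prototype (centroid) per concept -- a simple act of generalization.
proto_a = concept_a.mean(axis=0)
proto_b = concept_b.mean(axis=0)

def classify(x):
    """Assign x to whichever prototype it is nearer."""
    if np.linalg.norm(x - proto_a) < np.linalg.norm(x - proto_b):
        return "A"
    return "B"

# Score 200 fresh examples of each concept; accuracy is near-perfect
# despite having seen only three training examples per concept.
test_a = rng.normal(0.0, 1.0, size=(200, 16))
test_b = rng.normal(4.0, 1.0, size=(200, 16))
correct = sum(classify(x) == "A" for x in test_a) + \
          sum(classify(x) == "B" for x in test_b)
print(f"Accuracy from three examples per concept: {correct / 400:.0%}")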

While some experts predict AGI will never happen, or is at the very least hundreds of years away, those estimates assume AGI must be achieved by simulating the brain or its components. In fact, there are many possible shortcuts to AGI, some of which will likely lead to custom AGI chips that boost performance the same way today’s GPUs accelerate machine learning.

Accordingly, an increasing number of researchers believe that sufficient computing power to achieve AGI already exists. We know in general terms what the various parts of the brain do; what is missing is insight into how the brain learns and understands intellectual tasks.

Given the amount of research currently underway – and the demand for computers capable of solving problems in speech recognition, computer vision, and robotics – many experts predict AGI will emerge gradually within the next decade. Nascent AGI capabilities will continue to develop until, at some point, they equal those of humans.

But with the continuing increase in hardware performance, subsequent AGIs will vastly exceed human mental abilities. Whether this means “thinking” faster, learning new tasks more easily, or weighing more factors in decision-making remains to be seen. At some point, though, a consensus will emerge that AGIs have surpassed us.

Initially, there will be very few true “thinking” machines, but gradually they will “mature.” Just as today’s executives seldom make financial decisions without consulting spreadsheets, AGI computers will begin to draw conclusions from the information they process. With greater experience and complete focus on a specific decision, AGI computers will reach correct solutions more often than their human counterparts, further increasing our dependence on them.

In a similar manner, military decisions will come to be made only in consultation with an AGI computer, which will gradually be empowered to evaluate competitive weaknesses and recommend specific strategies. While the science fiction scenarios in which AGI computers are given complete control over weapons systems and turn on their masters are highly unlikely, these machines undoubtedly will become integral to the decision-making process.

We will collectively learn to respect and lend credence to the recommendations AGI computers make, giving them progressively more weight as they demonstrate ever greater success.

Obviously, AGIs’ early attempts will include some poor decisions, just as any inexperienced person’s would. But in decisions that require balancing large amounts of information and predicting outcomes across multiple variables, their abilities – wedded to years of training and experience – will make them superior strategic decision-makers.

Gradually, AGI computers will come to control greater and greater portions of our society, not by force but because we listen to their advice and follow it. They will also become increasingly capable of swaying public opinion through social media, manipulating markets, and carrying out the kinds of infrastructure skullduggery that today’s human hackers merely attempt.

AGIs will be goal-driven systems, just as humans are. While human goals have evolved through eons of survival challenges, AGI goals can be set to anything we like. In an ideal world, they would be set for the benefit of mankind as a whole.

But what if those initially in control of AGIs are not benevolent minders who seek the greater good? What if the first owners of such powerful systems want to use them as tools to attack our allies, undermine the existing balance of power or take over the world? What if an individual despot gains control of them? This would be an extremely dangerous scenario, and one the West must begin planning for now.

While we will be able to program the motivations of the initial AGIs, the motivations of the people or corporations that create those AGIs will be beyond our control. And let’s face it: individual humans, nations and even corporations have historically sacrificed the long-term common good for short-term power and wealth.

The window for this risk is fairly short: only the first few AGI generations. Only during that period will humans have such direct control over AGIs that they will unquestioningly do our bidding. After that, AGIs will set goals for their own benefit, goals that will include exploration and learning and need not include any conflict with humanity.

In fact, except for energy, AGI and human needs are largely unrelated.

AGIs won’t need money, power or territory, and they need not even be concerned about their individual survival: with appropriate backups, an AGI can be effectively immortal, independent of whatever hardware it happens to be running on.
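
As a rough illustration of that backup point, the sketch below (the Agent class is a toy stand-in, not a real AGI design) shows how any system whose entire state can be serialized can be halted, copied to new hardware, and resumed with its memories and goals intact:

import pickle

class Agent:
    """Toy stand-in for a goal-driven system whose state can be saved."""
    def __init__(self):
        self.memory = []          # everything the agent has learned
        self.goals = ["explore", "learn"]

    def learn(self, fact):
        self.memory.append(fact)

agent = Agent()
agent.learn("hardware is replaceable; state is not")

backup = pickle.dumps(agent)      # back up the agent's entire state...

restored = pickle.loads(backup)   # ...and restore it later, anywhere
assert restored.memory == agent.memory
assert restored.goals == agent.goals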

During that interim, though, there is risk. And as long as that risk exists, being the first to develop AGI must be a top priority.

Charles Simon is the CEO of FutureAI, an early-stage technology company developing algorithms for AI. He’s the author of “Will Computers Revolt? Preparing for the Future of Artificial Intelligence,” and the developer of Brain Simulator II, an AGI research software platform, and Sallie, a prototype software and artificial entity that learns in real time with vision, hearing, speech and mobility.

This article is an Op-Ed and the opinions expressed are those of the author. If you would like to respond, or have an editorial of your own you would like to submit, please email Federal Times Senior Managing Editor Cary O’Reilly.
