However long it appears the Pentagon has been grappling with artificial intelligence, the answer is that it's been doing so for much longer than that.
“One of the most significant of these technologies is artificial intelligence (AI). Under development for the past 30 years, AI is a family of technologies that use stored expertise and knowledge,” reads A Plan For The Application Of Artificial Intelligence To DoD Logistics, commissioned by the Logistics Management Institute on behalf of the Department of Defense.
I’ll get to the year that report was published in just a second, but first I wanted to share another excerpt which perhaps gives away the game.
“Experience has shown that AI technologies, when properly applied,” the report continues, “are key enabling technologies that will allow the Armed Forces to sustain and enhance their critical levels of warfighting capability in the environment of the 1990s.”
I’m Kelsey D. Atherton, reporting from Socorro, New Mexico, and I am here to talk about the long near-future of artificial intelligence.
‘A Plan for the Application’ was published in 1989. It references work that started in the late 1950s, capturing two generations of research in one fell swoop, and it reads eerily like similar reports published today.
The Congressional Research Service notes, in a report published in November 2019, that “AI may have future utility in the field of military logistics,” specifically pointing to the use of AI for predictive maintenance.
Thirty years prior, ‘A Plan’ recommended that the Office of the Secretary of Defense “establish a DoD-wide policy on AI-based maintenance systems for newly acquired weapon systems and non-weapon capital equipment. Given the current state of technology, every weapon system procured by the Services should be delivered with intelligent maintenance aids.”
This was a near-term application, with the idea that it could be implemented in the next 1-3 years, or by 1992.
The potential for comparisons is nearly inexhaustible, though it takes a special kind of interest to enjoy stitching together disparate points from commissioned reports decades apart. If there is real value here, it is not in the what that stayed the same, but in the why.
Artificial Intelligence is a complex technology to grasp. It means putting decision-making power into a machine, setting in motion processes that will yield useful results, and then trusting the humans on the other end of that processing to find what the machine rendered valuable. Specific AI, the kind developed for just one kind of input and one kind of processing, can be applied to virtually anything a company does, and so decades into the development of AI, there are still many seemingly basic tasks left to tackle.
There is also a secondary dynamic: the moment a task performed by code becomes simple and widely executed, it stops being seen as AI and becomes just “software.” AI is, by custom and design, as much a marketing term for the edge of the possible as a descriptive term for a suite of technical tasks already mastered.
When it comes to funding this research and development, the people doing the acquisitions are mostly not themselves technical experts. Pitching AI, and all that it encompasses, is the easier sell, and it's what has kept AI at the edge of the future, from the 1950s to the 2020s.
It is entirely possible that, by the time we reach the 2050s, there will still be white papers promising the benefits of greater adoption of AI.
These recommendations will be processed, absorbed, and inferred by machines. Summary algorithms will spit out Bottom Lines Up Front, and forward those recommendations to the virtual assistants of the flag officers in charge of allocating funding for widgets. Those virtual assistants will, in turn, summarize the recommendations on AI from AI for the generals, and as virtual headsets scan grizzled faces for micro-reactions, a slight nod will be all that is needed.
The human’s intent inferred, AI will approve funding for future AI.