WASHINGTON — ChatGPT, a bot launched by OpenAI in November that produces human-like conversations and content, including surreal art and computer code, has caught the eye of U.S. defense officials.
And the tech that underpins the viral bot, generative artificial intelligence, was recently added to a Defense Information Systems Agency watch list, according to Chief Technology Officer Stephen Wallace.
“We’ve heard a lot about AI over the years, and there’s a number of places where it’s already in play,” he said Jan. 25 at an event hosted by a chapter of AFCEA, a communications-and-electronics interest group, at the Army-Navy Country Club in Arlington, Virginia. “But this sort, the ability to generate content, is a pretty interesting capability.”
The watch list, regularly refreshed, has in the past featured topics that later became pillars of defense connectivity and security, such as 5G, zero-trust digital defense, quantum-resistant cryptography, edge computing and telepresence.
“We’re starting to look at: How does [generative AI] actually change DISA’s mission in the department and what we provide for the department going forward,” Wallace said.
ChatGPT, applauded by some for its potential to augment worker productivity and spurned by others over questions of bias and ethics, surpassed 1 million registered users within a week of its launch. The easily accessible platform has shown “a huge swath of the population” the power and pitfalls of AI, according to Bill Drexel, an associate fellow of technology and national security at the Center for a New American Security.
“While it’s obviously not a military system, per se, I think that that growing exposure of these kinds of incidental, often corporate-driven enterprises is really raising the awareness of what can go right and what can go wrong with these tools,” he told Defense News at a separate livestreamed event on Jan. 26.
Exactly how generative AI might be applied at the Pentagon is unclear, and Wallace did not provide specifics. Sam Altman, CEO of OpenAI, the company behind ChatGPT, met this week with lawmakers in an attempt to demystify the tool, Semafor reported.
The U.S. military is spending increasingly on AI and related technology to improve battlefield analysis and predict maintenance needs, among other applications.
The Pentagon’s public spending on AI, including autonomy, ballooned to $2.5 billion in 2021 from a little more than $600 million in 2016. More than 685 AI projects, including several tied to major weapons systems, were underway as of early 2021, according to the Government Accountability Office.
And in November, Air Force Chief Information Officer Lauren Knausenberger said the service must “automate more” to remain dominant in a world of advanced computing and lightning-fast decision-making.
Colin Demarest is a reporter at C4ISRNET, where he covers military networks, cyber and IT. Colin previously covered the Department of Energy and its National Nuclear Security Administration — namely Cold War cleanup and nuclear weapons development — for a daily newspaper in South Carolina. Colin is also an award-winning photographer.