There is immense power in the cloud. It’s a cutesy colloquialism, one that calls to mind spring afternoons or fantastical floating fiefdoms, but it is also one of the great modern misnomers. All data stored in the cloud is really housed in someone else’s computer. That’s one reason the Pentagon has been so reluctant to take advantage of the benefits of the cloud. The trade-offs of storing information outside the facilities where it’s directly used are likely fine for most casual or business cases, but they get a little scary when it comes to the raw data of national security. That’s why the Pentagon is looking to entrust its data to a dedicated, defended cloud, contracted under the unsubtle term Joint Enterprise Defense Infrastructure, or JEDI. On March 13, the nonpartisan coalition Open the Government released a report on Amazon and government secrecy, calling into question the contract and the process by which it will be awarded.

Like everything involving JEDI, there’s an extensive backstory. The cloud is essentially infrastructure, storage space that people and businesses pay for that allows them to access the same files from the internet, wherever they may be. That makes it somewhat different than traditional IT contracts, since it’s not exactly software and it’s not exactly hardware and, once adopted, it becomes the kind of thing future software and hardware depend on. That makes the contract itself, designed for a single supplier and an indefinite duration, worth $10 billion from the start.

Both the monetary value of the contract and the intangible, inextricable way JEDI will be tied to future military operations make it an important process to get right. The contract is expected to be awarded in April 2019, but it has already drawn protest and legal action over how it is structured and over possible connections between the people awarding the contract and its likely recipients. Going single-supplier likely means granting the contract to either Google or Amazon, whose Amazon Web Services cloud houses much of the existing internet.

Open the Government’s report, published March 13, examines Amazon’s existing contracts with parts of the federal government, including the Department of Homeland Security and the FBI. The report finds that, while privacy, transparency and accountability requirements for private contractors are already weak, they are especially so when it comes to Amazon and projects like JEDI. The nature of the cloud itself, and the logic powering algorithms and AI that run on cloud data, are inherently opaque.

To better understand the report, C4ISRNET spoke with Emily Manna, policy analyst with Open the Government and a co-author of the report. The below exchange has been lightly edited for clarity.

C4ISRNET: What prompted the report on Amazon?

EMILY MANNA: We started the research for this report by looking more broadly at private government contractors in the national security space, and it felt like all roads led to Amazon. Not only do they have cloud contracts with most federal agencies, they’re also pitching AI products to law enforcement and national security agencies that are generating lots of controversy. While the federal government is rife with revolving door and corporate capture issues, Amazon has garnered particular focus because of the JEDI contract process and HQ2. Amazon is also less transparent than many of its rivals, and works very hard to stymie public records requests.

C4ISRNET: Is there anything in particular about the JEDI contract that stands out as particularly unusual or ominous?

MANNA: The most noteworthy thing about the JEDI contract is its size, and the fact that it will go to only one provider. It certainly seems that former Defense Secretary [Jim] Mattis was enamored by what he learned from Amazon and Google when he visited their respective headquarters in 2017, and that Amazon’s advantage is bolstered by its ongoing cloud computing services for the intelligence community. The Oracle lawsuit has raised some serious ethical questions about potential conflicts of interest in DoD, and the Inspector General and FBI are now investigating whether the competition might have been unfair from the beginning.

C4ISRNET: How deep is the problem of a lack of transparency here?

MANNA: The secrecy goes very deep. In addition to the fact that complex AI systems are currently virtually immune to oversight because of the lack of explainability, the fact that much of this work is being carried out by private contractors like Amazon, Google, and other smaller firms harms transparency. Private companies don’t have nearly the same transparency requirements as government agencies, so a lot of the information about these contracts is protected as “trade secrets.” Finally, there’s just the larger problem of over-classification in the national security agencies.

C4ISRNET: Do we know enough to know what we don’t know?

MANNA: The short answer is “no.” We are almost completely in the dark about how the CIA and the rest of the intelligence community are using Amazon’s cloud computing services. We know only that CIA officials have bragged about the AI capabilities they have because of the contract with Amazon, but the possibilities for how they’re using AI are almost limitless, and some of those possibilities could be pretty scary if they have anything to do with counterterrorism and the drone program. DoD tells us quite a bit more about how they’re using and want to use this tech in the future, but we still don’t know much.

For example, we know generally about Project Maven and that DoD considers it to have been a huge success, but we don’t actually know whether it worked. We know there were lots of civilian casualties resulting from U.S. airstrikes around the time Project Maven was operationalized, but we have no idea how AI may have played into those deaths.

C4ISRNET: This opacity seems fundamental to AI and Amazon. Are there hard limits on how decipherable AI can be? Would posting the code openly be a way to get oversight of the AI’s functions?

MANNA: It certainly seems like open-source coding could help oversight bodies and the public better understand how government is using this technology — one wrinkle, of course, is the lack of technological expertise in Congress, due in large part to the fact that Congress just doesn’t have the resources to recruit expert staff and pay them competitively. But it does also seem like some of this may be inherent to complex AI and to working with private companies like Amazon. Congress should absolutely mandate that private contractors be more transparent to the public. When a private company is fulfilling government functions, there is a need for oversight and accountability that can’t just be written off for the sake of protecting proprietary information. Companies must also ensure that they know how their technology will be used before they sell it to government agencies, and that the technology is not immune to oversight. The federal government is understandably intent on staying ahead of the curve in the AI arms race, but they also need to be accountable to the public, and that means being able to assure that the technology they are using is predictable and explainable.

C4ISRNET: What of the likely counter-argument that transparency in, say, weapons AI is a security risk? Is this covered by the recommendation for a classified annex in reports?

MANNA: It’s, of course, true that some information will need to remain classified so as not to jeopardize national security. But right now, over-classification is a much bigger problem in the federal government, with experts estimating that between 50 and 90 percent of classified government information could safely be released to the public. Moreover, it’s possible that excessive secrecy about AI systems could be harmful to national security, because complex systems are often vulnerable to adversarial image attacks and other types of hacking. Many contractors providing services to DoD have already been hacked, including one of the AI firms that worked on Project Maven. Without robust oversight of these systems, there can’t be real accountability for these failures.

C4ISRNET: What should contractors know if they want to win government, and especially Pentagon, AI contracts?

MANNA: Companies need to take responsibility for how their services will be used by the Pentagon and other national security agencies. Agencies should be able to give contractors information about the types of operations and programs their services will be used for, without giving away specific operational details. Contractors should not only release more information to the public, they should also stop trying to prevent the government from releasing information about their work to the public through FOIA, an unfortunately common practice.

C4ISRNET: What’s your sense of urgency in the timing of the report?

MANNA: The most urgent thing is that the time to begin regulating and demanding transparency about the military’s use of AI systems is now. The more AI becomes integrated into the day-to-day operations of the Pentagon, the harder it will be to determine where this technology is at work — and when it’s responsible for failures or abuses.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
