Modern airplanes are, like so much else, bundles of metal wrapped around code. So when news got out Wednesday afternoon that Boeing had been hit by the WannaCry worm, a brief panic swept the aviation world: did the worm get into aircraft software? Could a worm in aircraft software demand ransom in flight? Would pilots have to shell out a ransom in bitcoin mid-flight in order to land? Fortunately, it appears that Boeing patched and quarantined the attack, so none of this came to pass, but it’s as good an opportunity as any to talk about what happens when the fifth domain meets the third domain.
First, let’s clear up what happened at Boeing. From Dominic Gates at Seattle Times, who broke the story:
Earlier in the day, when the cyberattack struck, the reaction was anything but calm.
The WannaCry worm made headlines in May 2017 when it hit hospitals in the UK, replacing vital displays with a message that the files on the computer were encrypted and would be destroyed unless a ransom was paid (in Bitcoin, of course). That attack was halted when a security researcher registered the domain programmed into the worm as a killswitch. Why did the worm have a killswitch? Rather than a singularly built malicious tool, WannaCry was based on EternalBlue, an exploit of a Microsoft Windows vulnerability discovered by the NSA and kept secret until it was stolen and exposed by the Shadow Brokers, a hacking group, in early 2017. Those releases almost immediately found their way into the repertoires of Russia, China, and other nations with extensive cyber capabilities. In December 2017, the United States pinned the May WannaCry attack on North Korea.
WannaCry is hardly the first or last bit of code fashioned into a weapon of industrial sabotage. Once inside a network, it can spread until either the killswitch domain resolves or the software is patched, as happened at Boeing on Wednesday. That latter point is especially important, because not every EternalBlue exploit or WannaCry variant includes a killswitch.
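The killswitch logic described above is simple enough to sketch. This is a minimal illustration of the mechanism, not WannaCry’s actual code; the domain below is a placeholder, since the worm’s real killswitch was a long pseudorandom string.

```python
import socket

# Placeholder domain -- purely illustrative, not the worm's real killswitch.
KILLSWITCH_DOMAIN = "example-killswitch-domain.test"

def killswitch_active(domain, resolve=socket.gethostbyname):
    """Return True if the killswitch domain resolves.

    The worm ran a check like this before propagating: if the domain
    resolved (meaning someone had registered it), the worm stopped
    spreading; if the lookup failed, it carried on.
    """
    try:
        resolve(domain)
        return True   # domain registered -> halt propagation
    except OSError:
        return False  # domain unregistered -> keep spreading

def should_spread(domain=KILLSWITCH_DOMAIN, resolve=socket.gethostbyname):
    # Spread only while the killswitch domain stays unregistered.
    return not killswitch_active(domain, resolve)
```

Registering the domain flipped that check worldwide at once, which is why a single researcher could stop the 2017 outbreak; a variant that simply omits the check has no such off switch.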
With the attack so recent, we are still a ways away from a clear analysis of who launched the attack, what systems specifically they targeted, and how the attackers got in. Diagnosing attacks, developing countermeasures, and attributing intrusions to specific actors can take years; earlier this month, US-CERT formally attributed, and published countermeasures for, a years-long cyber intrusion by Russia that started in 2016 and targeted, among other sectors, the aviation industry.
What can be done? Patching, for one thing. There is an understandable lag in patching critical equipment, which was one problem, though not the only one, in the UK attack. With industrial equipment, there’s less of an excuse, especially since Microsoft released and pushed patches to counter EternalBlue shortly after the company learned of the vulnerabilities. There’s another, nested danger in hoarding cyber exploits: however useful a given exploit is, the longer it goes undisclosed, the longer the underlying vulnerability goes unpatched, and that leaves companies using the compromised software susceptible to attack until the exploit is made public.
And we can expect these attacks to keep coming. As Greg Otto laments at CyberScoop:
We are lucky that this week’s attack appears contained, and we are also lucky that the worm attacked industrial equipment, rather than working its way into actual plane software. Production in a factory can be halted, computers on the ground pulled offline and patched. In the air? That’s less of an option. And unless every plane adopts a fully air-gapped security protocol like the fictional Galactica, infections like these will become a nightmare to troubleshoot.
Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.