It is easy to picture the moment a convoy turns into catastrophe. There are the dangers soldiers train for: ambushes and attacks, IEDs and interference. But what happens in the future when the disaster is cued, not by an abrupt explosion, but by stickers strategically placed on the road ahead, fooling the lead vehicle and all its followers into a seamless change of course?

This remains a hypothetical for the military, but it's already a live concern for civilian-operated autonomous vehicles. In March, Tencent's Keen Security Lab demonstrated a variety of adversarial attacks against a Tesla Model S. The Tesla has autonomous features that let it do everything from detecting rain and automatically starting the windshield wipers to changing lanes with minimal input from a human driver. With a detailed understanding of how these systems work, they can be fooled, and to great effect.

Tricking windshield wipers into responding to stimuli besides water is mostly harmless, but it illuminates the whole process. Tesla's autowiper takes images from the car's existing cameras, feeds them into a neural network, and triggers the wipers when the network decides the windshield looks like it has raindrops on it. The researchers fooled the autowiper by showing it images on a TV screen in a garage.
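That pipeline is easy to picture in miniature. The sketch below is a toy stand-in, not Tesla's implementation; the architecture, names, and 0.5 decision threshold are all illustrative assumptions. It shows why such a system responds to any pixels that look rainy, whether they come from weather or from a screen:

```python
# Toy sketch of a vision-based rain detector of the kind described above.
# NOT Tesla's implementation: architecture, names, and threshold are assumptions.
import torch
import torch.nn as nn

class RainNet(nn.Module):
    """Camera frame in, probability of 'raindrops present' out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),  # RGB frame
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, frame):
        x = self.features(frame).flatten(1)
        return torch.sigmoid(self.classifier(x))  # P(rain)

model = RainNet().eval()
frame = torch.rand(1, 3, 224, 224)  # stand-in for a windshield camera frame
if model(frame).item() > 0.5:       # decision threshold is an assumption
    print("trigger wipers")
```

The model never checks whether water is actually present; it only scores pixels. Anything that produces rain-like pixels, including a TV screen in a garage, gets the same response.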

“Reversing autowipers is a good start point for Tesla Autopilot research,” the researchers wrote in their published report. “Once we have understood how vision-based autowipers works, we can settle down to try some attack methods.”

Headlining the report was how the researchers fooled Tesla's Autopilot by placing stickers on the road.

“Since Tesla autopilot vision module has good performance of lane recognition, then we think the opposite way, could the car regard some inconspicuous markings we made on the ground as a normal lane?” wrote the researchers.

Rather than convincing the car that an existing lane isn't there, the attack uses a few small stickers to make the car see a lane that no human would perceive or steer by. A trio of small white stickers, spaced on the road in front of the car, convinces the vehicle to follow a false lane into oncoming traffic. (The research appears to have been conducted on a closed course.)
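The underlying failure mode can be caricatured in a few lines. The toy lane finder below is a deliberately simplified stand-in for Autopilot's vision module, offered only to show the principle: a detector that trusts sparse bright markings will happily fit a "lane" through three sticker-sized patches.

```python
# Toy illustration of the failure mode, not the attacked Autopilot pipeline:
# a naive lane finder that fits a line through bright road markings.
import numpy as np

def find_lane(road: np.ndarray):
    """Fit x = m*y + b through bright pixels; return None if too few."""
    ys, xs = np.nonzero(road > 0.8)  # 'bright' pixels = candidate markings
    if len(xs) < 3:
        return None
    m, b = np.polyfit(ys, xs, 1)
    return m, b

road = np.zeros((100, 100))                  # blank asphalt
print(find_lane(road))                       # -> None, no lane detected

for y, x in [(20, 40), (50, 50), (80, 60)]:  # three sticker-sized patches
    road[y:y+2, x:x+2] = 1.0

print(find_lane(road))                       # -> a confident 'lane' angling across the road
```

A real lane-detection network is far more sophisticated, but the researchers' result suggests the same basic gap: the system looks for machine-readable evidence of a lane, not for what a human driver would accept as one.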

Civilian and commercial traffic are not direct analogues to military operations, though any self-driving military vehicle will likely have to perceive, and sometimes follow, existing traffic markings. The research is informative nonetheless, because it shows the novel approaches an adversary can engineer to misdirect or disable vehicles by understanding how their autonomy works. As the Army works to field largely self-driving convoys, and especially self-directed combat vehicles, it's worth examining the ways an adversary could attempt to fool those systems. GPS spoofing is already a live concern; fail-safes or backup systems that cross-check purely vision-based navigation could mitigate the risk from stickers and other machine-visible lures.
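What might such a cross-check look like? A minimal sketch, assuming a vision stack that proposes a heading and a GPS/map route that supplies an expected one; the function name and the 15-degree tolerance are illustrative assumptions, not drawn from any fielded system.

```python
# Hedged sketch of one possible fail-safe: flag a steering command when the
# vision system's proposed heading diverges too far from the GPS/map route.
# Names and the tolerance value are illustrative assumptions.
def headings_agree(vision_heading_deg: float, route_heading_deg: float,
                   tolerance_deg: float = 15.0) -> bool:
    """Compare headings on a circle, so 359 and 1 degrees count as close."""
    diff = abs(vision_heading_deg - route_heading_deg) % 360.0
    diff = min(diff, 360.0 - diff)
    return diff <= tolerance_deg

# A sticker-induced lane change pulls the car off the planned route:
if not headings_agree(vision_heading_deg=72.0, route_heading_deg=90.0):
    print("disagreement: alert the operator or fall back to the GPS route")
```

The point is not this particular check but the design principle: no single perception channel, vision included, should be able to steer the vehicle unchallenged.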

If the possibility of adversarial misdirection isn’t taken seriously, self-driving military vehicles are in for a rough crash course in why it’s important.

Watch a video from Tencent:

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
