Sophisticated phony videos called deepfakes have attracted plenty of attention as a possible threat to election integrity. But a bigger problem for the 2020 U.S. presidential contest may be “dumbfakes”: simpler, more easily unmasked bogus videos that are quick and cheap to produce.

Unlike deepfakes, which require sophisticated artificial intelligence, audio manipulation and facial mapping technology, dumbfakes can be made simply by changing a video’s playback speed or editing it selectively. Because they are easy to create and can still convince an unsuspecting viewer, they are a much more immediate worry.
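To illustrate how little effort such a manipulation takes, here is a minimal sketch, not the method behind any particular video described in this story, that slows a clip to 75 percent of its original speed simply by re-encoding the same frames at a lower frame rate. It assumes Python with the opencv-python package and a hypothetical local file named clip.mp4.

```python
# Minimal sketch (assumes opencv-python is installed and a hypothetical file
# named clip.mp4 exists). Writing the same frames at a lower frame rate slows
# playback; no AI is involved.
import cv2

SLOWDOWN = 0.75  # play back at 75 percent of the original speed

src = cv2.VideoCapture("clip.mp4")
fps = src.get(cv2.CAP_PROP_FPS)
width = int(src.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT))

out = cv2.VideoWriter(
    "clip_slowed.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps * SLOWDOWN,
    (width, height),
)

while True:
    ok, frame = src.read()
    if not ok:
        break
    out.write(frame)  # copy every frame unchanged; only the timing differs

src.release()
out.release()
```

Note that OpenCV’s VideoWriter does not carry over the audio track, which a convincing manipulation would also need to slow or pitch-correct.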

A slowed-down video of House Speaker Nancy Pelosi that made her appear impaired garnered more than 2 million views on Facebook in May. In November, then-White House Press Secretary Sarah Sanders tweeted a sped-up video of CNN reporter Jim Acosta that made him look more aggressive than he was during an exchange with an intern. Her post received thousands of retweets.

The fact that these videos are made so easily and then widely shared across social media platforms does not bode well for 2020, said Hany Farid, a digital forensics expert at the University of California, Berkeley.

"The clock is ticking," Farid said. "The Nancy Pelosi video was a canary in a coal mine."

Responses to the more sophisticated deepfakes are constrained by difficult censorship decisions. Rep. Stephanie Murphy, D-Fla., authored a measure that would require the Director of National Intelligence to submit an annual report on how the United States can respond to deepfakes, on rival nations’ capabilities to generate them, and on which agencies are responsible for countering that activity. The amendment passed the House on July 17 as part of the annual Intelligence Authorization Act.

Social media companies, meanwhile, don’t have clear-cut policies banning fake videos, in part because they don’t want to be in the position of deciding whether something is satire or intended to mislead people — or both. Doing so could also open them to charges of censorship or political bias.

Facebook, however, will "downrank" false or misleading posts — including videos — so that fewer people will see them. Such material will also be paired with fact checks produced by outside organizations, including The Associated Press.

There are also vast gray areas that depend on a viewer’s political affiliation or sense of humor.

One social media user who calls himself Paul Lee Ticks — a play on the word “politics” — often makes fabricated videos, mostly of President Donald Trump. In one of his most recent video edits, he added a “concentration camps” sign to the Trump International Hotel & Tower in Chicago.

Another social media user who goes by the handle Carpe Donktum makes edited videos in support of the president. Following Trump’s June comments that Joe Biden appeared slow, Carpe Donktum slowed down video footage of Biden and spliced two clips, making the former vice president appear to say something he did not.

Trump often retweets Carpe Donktum, who met the president in person last week during the White House’s “social media summit” featuring conservatives. Carpe Donktum says he makes parody videos and disputes the notion that his videos are “doctored” because their intent is satirical and the manipulations obvious.

“These are memes and have been on the internet since the internet’s inception,” he said.

Both Paul Lee Ticks and Carpe Donktum, who spoke to the AP on the condition of anonymity, citing fear of threats and harassment, started out making simpler, comical videos. But their videos have grown more sophisticated, blurring the line between real and fake in ways that can convince viewers who are unsuspecting or unfamiliar with their comedic style.

Concern about these videos is growing among experts, politicians and the general public.

During a House intelligence committee hearing on June 13, Rep. Adam Schiff, a California Democrat and the committee’s chairman, said the Pelosi video illustrates the scale of the problem ahead. In a June Pew Research Center survey about made-up news and information, 63 percent of Americans said videos and images altered to mislead the public create a great deal of confusion about the facts of current issues.

Other manipulations are just as crude to produce, yet subtler in effect. Some fake videos, for instance, pair authentic historical footage of public unrest or police activity with incorrect dates or locations to falsely suggest they show breaking news.

“Disinformation is so powerful in our levels of political polarization,” said Ohio State University professor Erik Nisbet, who co-authored a study in 2018 that found fake news may have contributed to Trump’s 2016 win. “People are angry, worried and anxious. They are more vulnerable to misinformation and disinformation that validates their feelings.”

Demographics also play a role. Cliff Lampe, a professor at the University of Michigan, said older generations that were raised on mass media “tend to trust video more.” A study published in the journal Science Advances in January found that people over 65 and those who are ultra-conservative were more likely to share false information.

Edward Delp, director of the Video and Image Processing Laboratory at Purdue University, and his team have developed an algorithm to detect deepfakes. Finding ways to protect and authenticate videos, he said, could help minimize the impact of manipulated video.
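As a minimal sketch of what authentication can look like, and not a description of the Purdue team’s detection algorithm, the snippet below computes a SHA-256 fingerprint of a video file. If such a fingerprint were published alongside an original clip, anyone could later check whether a copy had been altered; the file name here is hypothetical.

```python
# Minimal sketch: fingerprint a video file so an unaltered copy can be verified
# later. Illustrative only; this is not the Purdue detection algorithm.
import hashlib

def video_fingerprint(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in 1 MB chunks so large videos do not need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(video_fingerprint("clip.mp4"))  # hypothetical file name
```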

However, video authentication may do little to change people’s views. Farid, the UC Berkeley professor, noted that the original, unaltered clips of the House speaker were easy to find online, yet many people were still willing to believe the slowed-down Pelosi video was real.

“If we can’t get it right, I mean the public and Facebook, where are we going to be when we have more complex fakes?” he said.

Cal Pringle of C4ISRNET contributed to this report.
