The White House and Facebook are working together to counter foreign influence operations, but research suggests social media sites need to be more aggressive in taking false content off their platforms.

The Department of Homeland Security hosted a conference call Aug. 6 with state election officials and Facebook, according to a statement. The conference call included a discussion of disinformation tactics used online and was part of the Trump administration’s plan to work with the private sector to protect elections.

“Strengthening collaboration between social media companies and federal, state, and local governments is critical to preventing foreign interference in our democratic processes, including elections,” Christopher Krebs, under secretary at Homeland Security, said in the release.

On July 31, Facebook removed 32 pages and accounts for a fake political influence campaign. Although it was not clear who was behind the pages, Facebook said the tactics were similar to those used by the Russia-based Internet Research Agency.

“We continue to see a pervasive messaging campaign by Russia to try to weaken and divide the United States,” Director of National Intelligence Dan Coats said during a White House press briefing Aug. 2.

But tackling disinformation requires more action from the social media sites themselves, a researcher presenting at the Black Hat conference told Fifth Domain.

Misinformation needs to be tackled at the platform level, said Sara-Jayne Terp, a data scientist at AppNexus. One potential solution, she said, is to mandate two-factor authentication on social media sites. “If you’re running 1,000 bots you need 1,000 phones to authenticate them, and it gets a lot harder,” Terp said.

She also said there is a financial disincentive for social media sites to remove fake users from their platforms, because doing so reduces their content volume and advertising revenue.

“It’s going to be painful,” said Terp, who is presenting at the Black Hat conference. She added that social media sites should coordinate their actions to mitigate negative consequences. Still, Terp said that social media companies “are definitely trying” to combat disinformation.

Facebook has struggled to take down misinformation and hate speech, even in seemingly clear-cut cases.

The social media giant has pledged to improve its work in this area.

“Facebook is investing heavily in security so that we can find and address the threats posed by inauthentic actors, including by expanding our security teams and improving our artificial intelligence tools to detect and block fake accounts,” Kevin Martin, a vice president for public policy at Facebook, said in the Homeland Security statement.

Justin Lynch is the Associate Editor at Fifth Domain. He has written for the New Yorker, the Associated Press, Foreign Policy, the Atlantic, and others. Follow him on Twitter @just1nlynch.