Facebook’s elections ‘war room’ in Dublin, Ireland. Photograph: Reuters

Inside Facebook's war room: the battle to protect EU elections


The social media firm is deleting billions of fake accounts as it takes on a torrent of fake news, disinformation and hate speech

Less than three years ago, Facebook’s chief executive, Mark Zuckerberg, dismissed as “crazy” the idea that fake news on his platform could have influenced the election of Donald Trump as US president.

Today the company admits it is under siege from billions of fake accounts trying to game its systems to win elections, make money or influence people in other ways, while it battles a tsunami of fake news, disinformation and hate speech.

Defeating them has become a matter of corporate survival, and Facebook wants users and regulators to know that it has stepped up those efforts. It also wants them to believe it is turning the tide.

This week it took more than a dozen journalists to the Dublin “war room” at the heart of its efforts to protect European elections, to show off the resources it is pouring into safeguarding the continent-wide vote.

Until the 23 May poll, and for several days after, about 40 people will be hunched over screens around the clock, monitoring the shifting pace of online conversation, looking for signs of manipulation, fake news or hate speech. They are backed up by a global network including threat intelligence experts, data scientists, researchers and engineers.

Native speakers in all 24 official EU languages are also part of the team, said Lexi Sturdy, who has flown in from the US to run the election protection effort, after managing a similar operation for the American midterm elections.

The scale of the challenge facing Facebook, as it tries to clear “bad actors” from the system, is staggering. Richard Allan, the company’s vice-president for public policy, said it took down 2.8bn fake accounts between October 2017 and November 2018.

In addition to those fake accounts, there are real accounts that are sharing fake news, intentionally spreading disinformation or promoting hate speech. The company has also started vetting people who want to post political advertisements, and committed to keeping libraries of campaign ads online for seven years.

But despite the resources poured into tackling attempts to manipulate voters through the platform, from false advertising to spreading hate speech, Facebook is still struggling to root out the people and networks that it calls bad actors.

Journalists and activists in the last month alone uncovered far-right networks in Spain that reached nearly 1.7 million people, discovered ads posted by the Trump campaign in the US that violated Facebook’s own rules, and revealed an “astroturf” campaign of ads supporting a hard Brexit that purported to be grassroots but was coordinated by a veteran political operative.

None were spotted by Facebook’s own tools. Nathaniel Gleicher, its head of cybersecurity policy, said these were constantly improving but admitted that the company did not have the capacity to fully protect elections.

“In a situation like that, no single organisation can tackle it by themselves,” he said of interference and fake news. Journalism and activism would be needed to bolster the company’s own efforts, he added.

“Obviously I would like to be able to catch every single operation ourselves first,” he said. “But the reality of security is, you need as many people focused on the problem as possible.”

He sketched the broad outline of the company’s two-pronged international approach to stopping abuse. Facebook aims to use artificial intelligence to make it harder to game the system, and to speed up efforts to remove those who do break the rules. The aim is “to force the bad actors to spend their time trying to defeat the filter, rather than trying to drive their messages”.

Facebook declined to give any examples of where it had intervened to stop people targeting European elections. But Sturdy said previous successes included identifying a spike in hate speech in Brazil after the first round of the presidential election last year; within an hour, a new meme was taken down.

In the US midterms, Facebook’s automated systems identified 90% of voter suppression attempts that it would go on to remove, with only 10% flagged up by users, she said.

But the failings of these automated systems – there is no way to know, for example, how many voter suppression efforts may have escaped both human and automated systems – may put votes in smaller or poorer countries particularly at risk.

In places where media and activists are under pressure, Facebook has not made clear who might provide the kind of additional checks that have caught bad actors elsewhere.

One gap in European checks was flagged up by the Hungarian journalist Márton Gergely when he asked executives: “Why don’t you have fact-checkers in Hungary?” Major news sites, including one allied with the government, have been criticised for publishing fake news, but Allan said the company had not found a credible partner.

“It’s not that we don’t want to have fact-checkers in any particular country,” he said, asking for suggestions. That stance protects Facebook’s reputation for balance, but potentially leaves voters in Hungary particularly vulnerable to fake news.

Beyond the European elections, Allan promised that there would be some protection for all votes around the world, but declined to give even the broadest-brush guarantees of what basic safeguards might look like.

“There will be different measures put in place for different countries depending on the threat profile,” he said, raising the alarming prospect of a system of opaque tiers of safeguarding. “As much as we can, we will be aiming to protect all elections around the world.”
