Here Are the Ways (So Far) that Facebook Has Undermined US Elections

The truth is, the bigger your company/platform is, the easier it is to game. Facebook is a case study in this. It has struggled to anticipate threats and always seems to be reacting. It famously did not care to protect its users’ personal data, until it did. Even when it claims to have fixed a problem, as ProPublica has found, the problem isn’t necessarily fixed. The issue gets amplified when your founder, CEO, and likely world-historical figure seems uninterested – sending his lawyer to testify before Congress, for example, or taking days to respond to a news story that has grown legs and effectively hijacked the 24-hour news cycle. Facebook, in other words, has some structural issues, some PR issues, and some personality issues. These seem to be building toward a reckoning that won’t be kind to the company, if the trending #deletefacebook hashtag on Twitter is any indication (as has been reported, even Brian Acton, cofounder of WhatsApp, which Facebook purchased in 2014 for $19 billion, has had enough). Shares in Facebook, as of this writing, have fallen 13 percent, capping the stock’s third-worst week since its 2012 IPO.

Below are the ways Facebook has been used to undermine US elections.

A Russian troll factory linked to the Kremlin spent about $100,000 to reach 126 million Americans.

In September, we learned that the Kremlin-backed Internet Research Agency purchased about 3,000 Facebook ads for roughly $100,000 between June 2015 and May 2017. The ads were an eclectic collection, focused, as Facebook wrote in an official statement, “on amplifying divisive social and political messages across the ideological spectrum – touching on topics from LGBT matters to race issues to immigration to gun rights.”


The ads were also unabashedly anti-Clinton, although they weren’t necessarily always pro-Trump: as POLITICO notes, some of the ads also promoted Bernie Sanders – even after his campaign had ended – and Green Party candidate Jill Stein.

In response, Facebook announced it had shut down the accounts and pages associated with the ads and was working on “technology improvements for detecting fake accounts and a series of actions to reduce misinformation and false news.” These included using machine learning to detect fake accounts, contracting with third-party fact-checkers, and facilitating improved news literacy through projects like the Facebook Journalism Project.

In November, we learned just how many Americans those posts had reached: thanks both to ad-targeting and virality, about 126 million US Facebook users – about half of all Americans of voting age – were exposed to at least one “inflammatory post” that originated in Russia.

In response to this massive revelation, Facebook’s top lawyer, testifying before US lawmakers on Capitol Hill, argued that the impact was actually minuscule when you consider all the noise surrounding the 2016 election – Russian spending on these posts, for example, represented a fraction of one percent of all the money spent on the election – and pointed to ongoing efforts to establish stronger ties with the news industry and combat fake news on the platform.
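For a sense of scale, here’s a quick back-of-envelope check in Python. The $100,000 figure is from Facebook’s disclosure; the roughly $6.5 billion total for 2016 federal election spending is an outside estimate often attributed to the Center for Responsive Politics, not a number from the testimony itself:

```python
# Rough scale check on the "fraction of one percent" claim.
# Assumed: ~$6.5 billion in total 2016 federal election spending
# (a commonly cited outside estimate, not from the testimony).

russian_ad_spend = 100_000            # reported IRA spend on Facebook ads
total_election_spend = 6_500_000_000  # assumed total 2016 spending

share = russian_ad_spend / total_election_spend
print(f"{share:.6%} of total spending")  # -> 0.001538%
```

Under that assumption, the lawyer’s framing checks out arithmetically – though, as lawmakers pointed out, dollars spent is a strange yardstick for posts that reached 126 million people.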


Lawmakers were not convinced, although Congress took no real action at the time, beyond lots of hemming and hawing and making the posts in question available to the public.

In the aftermath of the revelations, ProPublica, the award-winning nonprofit investigative news organization, ramped up an ongoing reporting series exploring the various ways Facebook is used for discriminatory or predatory purposes. Among its findings: Facebook lets advertisers exclude users by race and, conversely, target hyper-specific interest categories like “Jew haters”; the company has inconsistent hate-speech rules that “protect white men from hate speech but not black children”; and the company allowed scammers to boost politically inflammatory posts linked to malware. In each case, Facebook claims to have fixed the issue, either before or after ProPublica contacted it. I’d very much suggest taking a look at these articles, because they illustrate both how easy it is to game the world’s largest social platform and how disturbing the effects on US democracy can be.

Russian trolls persuade unwitting Trump supporters to organize pro-Trump rallies.

As noted in the Special Counsel’s indictment of three Russian entities and 13 Russian nationals, Russian trolls used Facebook and Instagram (which Facebook owns) to organize political rallies during and after the election season, posing as “U.S. grassroots activists who were located in the United States but unable to meet or participate in person.” Business Insider details eight of these rallies, including one, originally reported by the New York Times, in which a group of protesters in Houston rallying against “the threat of radical Islam” was confronted by a group of counter-protesters; both groups had been organized by Russians.

video: CNN’s Drew Griffin looks for US citizens who may have been convinced by Russian trolls to take part in the 2016 campaign.

Fake news.

A couple of studies illustrate the extent of Facebook’s fake news problem, the scope of which is difficult to overstate.

A BuzzFeed News analysis conducted the week after the election found that “the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets.” In the final three months of the campaign, the top 20 fake election stories generated about 8.7 million shares, reactions, and comments, while the top 20 stories from major news outlets generated about 7.3 million. Significantly, all but three of the top 20 fake news stories were either pro-Trump or anti-Clinton.

Other BuzzFeed News analyses found that hyper-partisan Facebook pages were pushing false stories at alarmingly high rates – which is part of the reason Facebook recently de-emphasized Page posts in its News Feed (in favor of Groups, which can be gamed in pretty much the same way, but oh well) – and that much of the fake news was coming from more than 100 pro-Trump sites run out of Macedonia.

A far more wide-ranging study, conducted by data scientists and published in Science earlier this month, found that, “by every common metric,” as the Atlantic reports, “falsehood consistently dominates the truth on Twitter.” To reach this conclusion, the researchers analyzed “every major contested news story in English across the span of Twitter’s existence – some 126,000 stories, tweeted by 3 million users, over more than 10 years” – a massive undertaking that leaves little room for doubt about the conclusions. And although the study was conducted on Twitter, its findings likely apply to other social media platforms, like Facebook and YouTube, that have similar business models and have been plagued by similar problems.

Among the study’s findings: on average, a false story reaches 1,500 people about six times faster than a true story does, and false political stories diffused faster than false stories in any other category. Another surprising finding: bots didn’t explain the gap. As the abstract explains: “Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.” (Emphasis mine.)

Quick analysis:

Both of these build on the work of previous studies, a handful of which I mentioned in my launch letter from the editor, and confirm two things we’ve long known: (1) new or unexpected information catches our attention more effectively than anticipated or unsurprising info, and (2) information that confirms our beliefs and values catches and holds our attention much more effectively than information that challenges us.

What’s new, of course, is how seamless the information-sharing process has become, thanks to social media platforms. In the decade or so that Facebook and Twitter have been around, the amount of information available has increased exponentially (as it did when the Internet launched, too), and with so much to choose from, people fall back on the same selection strategies that allowed us to survive and flourish as a species: tribalism, confirmation bias, making decisions based on fear.

Cambridge Analytica, using a middleman, harvests the data of 50 million users to target voters.

The data analytics firm, a joint venture between Robert Mercer and SCL Group, played a key role in the Brexit campaign and Trump’s improbable 2016 victory, and now we know why: using a Facebook app developed by data scientist Aleksandr Kogan, Cambridge Analytica obtained the personal data of 50 million Facebook users, which it then used to build psychological profiles for hyper-targeted political ads.

This was legal, which is part of the problem: Kogan’s app disclosed that it would be collecting personal data, and at the time, as the Washington Post reports, “Facebook also allowed developers to collect information on friends of those who chose to use their apps if their privacy settings allowed it.” That’s how the app, downloaded by about 270,000 people, was able to collect the information of some 50 million users.
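To see how the multiplication works, here’s a back-of-envelope sketch in Python. The 270,000 and 50 million figures come from the reporting; the friends-per-installer number is just the implied ratio, not a reported statistic:

```python
# Back-of-envelope: how ~270,000 app installs could expose ~50 million
# profiles under Facebook's pre-2014 friends-data permission.

installers = 270_000         # reported downloads of Kogan's app
affected_users = 50_000_000  # reported number of users whose data was obtained

# Implied average number of friends exposed per installer
# (an inferred ratio, not a number from the reporting):
implied_friends = affected_users / installers
print(f"~{implied_friends:.0f} friends per installer")  # -> ~185
```

An average of roughly 185 friends is well within a typical Facebook friend count, which is exactly why the friends permission multiplied the app’s reach so dramatically.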

When Facebook learned of Kogan’s “breach of trust” in 2015 – the company had believed he was collecting data for academic purposes – it banned his app. The story was only uncovered this week, however, by the UK’s Guardian and Observer, and it blew up thanks to a New York Times report suggesting that Cambridge Analytica still possessed most or all of the data trove. Fuel was added to the fire when the UK’s Channel 4 published video of Cambridge Analytica CEO Alexander Nix bragging about the firm’s secret campaigns in elections worldwide, claiming it offers bribes and prostitutes to entrap candidates and manufacture political dirt.

After days of silence, Mark Zuckerberg made the rounds on Wednesday, sitting down for interviews with Recode, CNN, and Wired, and writing on Facebook that the company would take steps to ensure personal data is protected moving forward. Those steps include: investigating apps that had access to large amounts of data before 2014, auditing apps with suspicious activity, reducing developers’ access to data, and providing a tool at the top of the News Feed that lets you manage permissions.

Zuckerberg’s charm offensive was not enough: calls from lawmakers for him to testify before Congress are mounting, and the FTC has opened an investigation. We’ll see what happens.

header image: "gold lock," mark fischer / "mark zuckerberg," alessio jacona / "binary code," christiaan colen (all from flickr) 
