In Releasing Previously Undisclosed Community Standards, Facebook Continues to Move in the Right Direction

The ongoing effort of tech giants like Facebook and Google to identify ways to mitigate or eliminate the deleterious effects their products have had on US democracy is a profoundly important story. For better or worse, the internet has become the harbor for all of our information, and platforms like Facebook, Twitter, Medium, and Reddit are where much of our civic dialogue takes place. It’s important to recognize that our digital democracy is undergirded, in most cases, by private companies whose primary motive is profit and whose primary responsibility is to their shareholders. I pointed this out a few weeks ago, but it’s worth mentioning again: business is, as the historian Richard Hofstadter noted in the 1960s, “the most powerful and pervasive interest in American life”; it dominates the national public discussion, its old-school embodiment sits in the White House, and the tech titans of today – guys like Mark Zuckerberg, Jack Dorsey, Elon Musk, Jeff Bezos, and Bill Gates – are less like the robber barons of the late 19th and early 20th centuries and more like the business leaders of a century earlier, who, as I wrote, “considered themselves a civilizing force.” In that era, business leaders “held office and patronized the arts and sciences” and “in their dealings in Europe and elsewhere, they felt a duty to represent the US as well as themselves.”

It’s clear that our schlubby white-guy tech titans feel that same pull toward civic engagement, but it’s also clear that, thanks to a century-plus tradition of runaway capitalism (a system of trade that’s increasingly global and thus, necessarily, divorced from civic duty), they’ve got no real clue how to marry their companies back to their country in any meaningful way. There’s no strong, uninterrupted tradition of doing so.

Which is why the pace at which they’ve responded to the harmful effects their products have had on US democracy has actually been – despite criticism to the contrary – quite extraordinary. It’s been just seventeen months since the election, and already Facebook, Google, and Twitter have rolled out new features, tools, and rules to combat the spread of misinformation on their platforms, dedicated hundreds of millions of dollars to local journalism projects, and been generally receptive to criticism in a way they never were before. It may not be as fast as we’d like, and they may have erred in failing to foresee and then admit to the issues afflicting them (and us), but thanks to public pressure and their own desire to contribute positively, they’re starting to figure it out.

For whatever reason, Facebook’s transformation has been particularly painful and excruciatingly public. Last month I wrote about all of the ways that Facebook has undermined our democracy; since then, Mark Zuckerberg, the company’s founder and CEO, has gone on what’s basically an apology tour, taking full responsibility for, among other things, the recent Cambridge Analytica scandal, and musing that he’d gotten it wrong years ago, philosophically speaking, when the company chose to prioritize data portability over privacy. Zuckerberg even, over a cringeworthy couple of days, testified before Congress (at which point it became clear that our congressional representatives have no real clue how Facebook works or how to resolve its issues), conceding that, yes, Facebook should be regulated, although he wasn’t sure how.

Over the course of this month, Facebook has quietly made various tweaks to its platform targeting the transparency (for ads and news sources) and privacy (for users) issues dogging it. On April 6th, the company rolled out new requirements for political advertisers, along with increased transparency features around the ads they run – including a “View Ads” feature that will let any user see all of the ads a Page is running, whether or not those ads show up in their News Feed. On April 9th, it announced a new initiative, in partnership with various foundations, to study the effects of social media on democracy. A day later, it announced the Data Abuse Bounty, an incentive program to “reward people who report any misuse of data by app developers.”

In addition to announcing tweaks and initiatives, Facebook has been posting frequently to its Hard Questions series this month. Among the topics covered: terrorism on the platform, the info that advertisers have on users, and the data Facebook collects on users when they’re not using the platform.

Today, Facebook took another step by publishing its internal enforcement guidelines on what is and isn’t allowed on the platform and by introducing the beginnings of a robust appeals process.

The 27-page set of guidelines, called “Community Standards,” walks content reviewers through six basic subject areas: Violence and Criminal Behavior, Safety, Objectionable Content, Integrity and Authenticity, Respecting Intellectual Property, and Content-Related Requests. As Monika Bickert, Facebook’s VP of Global Policy Management, notes, the company employs more than 7,500 content reviewers, who work 24 hours a day, seven days a week, in more than 40 languages. This is a massive effort, but it’s paltry compared to Facebook’s user base of more than 2 billion people, who share billions of posts a day – which is why Zuckerberg was so keen, in his media interviews and congressional testimony, on the use of artificial intelligence.

As Elizabeth Dwoskin and Tracy Jan report at the Washington Post, Facebook’s content reviewers have at times struggled to distinguish appropriate content from inappropriate content:

The company’s censors, called content moderators, have been chastised by civil rights groups for mistakenly removing posts by minorities who had shared stories of being the victims of racial slurs. Moderators have struggled to tell the difference between someone posting a slur as an attack and someone who was using the slur to tell the story of their own victimization.

In another instance, moderators removed an iconic Vietnam War photo of a child fleeing a napalm attack, claiming the girl’s nudity violated its policies. (The photo was restored after protests from news organizations.) Moderators have deleted posts from activists and journalists in Burma and in disputed areas such as the Palestinian territories and Kashmir and have banned the pro-Trump activists Diamond and Silk as “unsafe to the community.”

By making the guidelines its censors use available to the public, Facebook is opening itself up to a potentially productive feedback session, one that may result – let’s hope it does – in more effective guidelines and better censoring.

The new appeals process is also an incremental improvement – before today’s rollout, users had no way of protesting the removal of individual posts – and likely just the first step toward a comprehensive appeals process that also implements, if Mark Zuckerberg has his way, a self-governance model à la Reddit. In an interview with Vox’s Ezra Klein a week before his congressional testimony, Zuckerberg explained his vision of a sort of Supreme Court of users, independent of Facebook the company, to which users could appeal if they disagreed with a content decision made by censors. It’s a vision aligned with the Global Community manifesto he penned back in February 2017, in which he imagined a fully borderless community confronting major existential issues like climate change and global poverty, using Facebook as its means of communication.

header image: "mark zuckerberg," jd lasica / flickr, thumbs up & facebook logo / facebook
