Moderator Guide

Are Content Moderation Policies of Facebook Trustworthy?

Facebook's claim that it does not whitelist influential people who break its content moderation rules is a LIE! Once again, the social media giant is showing why it can't be trusted to self-report how it moderates content.


Facebook maintains a separate, far more permissive content moderation process that applies only to high-profile users, and the company has lied about it for years. A recent Wall Street Journal exposé on Facebook's "Cross Check" (or "XCheck") program is yet another example of why social media platforms cannot be trusted to self-report and self-regulate. Independent oversight is necessary.


For those of us who have spent years comparing what Facebook says to what it does, the revelation that the company keeps a get-out-of-jail-free card for high-profile users who break its community standards was unsurprising.


Facebook has emphasized its dedication to "providing people a voice, keeping people safe, and treating people equitably" in an attempt to present itself as benevolent. The company's actions, however, are largely dictated by business priorities: internal records show that enraging powerful users is considered "PR dangerous" and bad for business.


What was unexpected were the lengths to which Facebook was willing to go to conceal the true nature of the Cross Check program: intentionally misleading the public and potentially undermining its Oversight Board, a multimillion-dollar gamble on self-regulation.


In 2018, in its lone blog post about Cross Check, Facebook claimed to have one set of rules for all users: "We want to make clear that when information violates our standards, we remove it from Facebook, regardless of who posted it. There are no unique safeguards for any particular group... To be clear, checking something on Facebook does not guarantee that the profile, Page, or information will not be removed. It's just done to ensure the accuracy of our conclusion."


In January 2021, during the board's review of President Trump's suspension from the platform, Facebook restated this untruth, claiming that all users are subject to the "same general guidelines." When the board asked the company to provide more information about the program, the company said that increased transparency was "not practical" because Cross Check is used in only a "small percentage" of cases. According to the Wall Street Journal, Facebook uses Cross Check on millions of accounts, entirely shielding some high-profile users from enforcement while punishing others leniently.

The Oversight Board declared that it would investigate whether Facebook was "completely forthright in its responses in relation to cross-check, including the practice of whitelisting."


Over the last few years, Facebook has deliberately redirected public attention away from Cross Check and toward its newsworthiness policy when discussing its handling of powerful individuals. Cross Check, for example, is not mentioned in any of the three reports from the company's civil rights audit, even though several of those documents outline the company's newsworthiness policies. As a result, the public, regulators, and human rights organizations have been denied meaningful engagement with the company on its rules for high-profile accounts.


The success of Facebook's deception is evident in the fact that "newsworthiness" or "public figure" appear more than 300 times in the 7,656 published public comments the Oversight Board received regarding Trump's suspension, while "XCheck" and "Cross Check" do not appear at all.


Although Facebook is more transparent than other platforms such as YouTube, it strictly controls what information it provides and who has access to it. The company has pointed to its research partnerships as proof of its commitment to transparency, but subsequent events have called that commitment into question.


According to the New York Times, the data Facebook provided to researchers studying misinformation on the platform covered only about half of US users (those with obvious political viewpoints) rather than all users, as Facebook had claimed. Meanwhile, Facebook's CrowdTangle transparency tool, which researchers use to track what people are saying, was found to be missing thousands of posts about the January 6 uprising.


While some of these incidents may be due to human error, Facebook has also intentionally restricted access for outside researchers, making it impossible to obtain an unbiased view of how the platform is being used. In August, following multiple scathing news reports, researchers monitoring political advertising at NYU's Ad Observatory were suspended from the platform. Facebook also recently made changes to its news feed that make it more difficult for watchdogs to conduct large-scale audits of the site. And Facebook disbanded the CrowdTangle team in April, raising concerns that the tool will eventually be phased out.


Although it is evident that Facebook cannot be trusted to self-report, it is vital that the public understands how the platform moderates content. Social media has become the new public square: it is where we share ideas, get news, interact with others, and discuss current events. Because it has so radically altered how we communicate, equitable access to it is critical. When companies like Facebook publicly declare their dedication to free speech and equality while quietly policing online discourse in ways that perpetuate existing power structures, civil liberties such as freedom of expression and the right to assemble are jeopardized.


This double standard also exposes vulnerable groups with less political clout to online and offline harms: being silenced by hate speech, harassment, or the over-removal of their posts, as well as doxxing and violence.


Content moderation is difficult and complicated, yet policymakers and researchers cannot propose meaningful regulation or remedies without trustworthy data, and platforms cannot be relied upon to report on their own activity. Transparency must become a legal requirement, and it should apply to all social media sites, not just Facebook.


