NEW: Facebook Says It Removes Our Harmful Posts, but Its Own Numbers Disagree

Facebook admits that when posts are flagged as potentially harmful to its users, it removes only 44 percent of them in the U.S. and Canada. But how many posts does the company actually remove?

The company’s metrics for post removal have varied since July, according to leaked Facebook employee documents obtained by the digital journalism platform 9to5Mac. On the one hand, CEO Mark Zuckerberg said in July that the company removes far fewer posts than its own statistics suggest. “Our aim is to detect and prioritize content that will raise the social well-being of our community,” he wrote in a July post.

But on Sept. 26, Facebook unveiled new statistics that appeared to show its guidelines were stricter than what its algorithms could actually enforce. That same day, Zuckerberg had to apologize after Facebook users discovered that the company’s own statistics revealed its algorithms were failing to detect online child predators and to stop Russian trolls from spreading misinformation, toxic content and hate speech. Facebook also said it had fixed those issues, in part by using artificial intelligence.

When the leaked Facebook documents were brought to the attention of Charlie Warzel, editor in chief of Fox News Digital, he speculated that the numbers were inconsistent between countries. “These numbers could be genuinely at odds with your beliefs, or a combination of skewed numbers and questionable decisions in countries like Canada, but my suspicion is the relatively low rate [of removal of posts] in the U.S. means that Facebook isn’t digging as deeply in the United States as it is in other countries,” Warzel wrote.

A database maintained by Research on Facebook and the ACLU shows that despite the decrease in the United States, the company removes far more posts from users in other countries than it does from U.S. users. The database indicates that Facebook removes posts at markedly different rates from one place to another.

In 2015, when more people worldwide were active on Facebook, the company removed 11 percent of its users’ posts, according to the ACLU. That figure has since dropped significantly, though not by as much as the U.S. removal rate.

On Sept. 26, in response to the controversy over its metrics, Facebook released a statement saying that its algorithm doesn’t flag posts as potentially harmful to users on its own. “The AI tools that process user reports to improve our detection of content that can be harmful to people on Facebook are not looking for content that is potentially harmful to people, but for content that people report as harmful,” the statement said.
