Facebook routinely takes down posts for suspected spam, hate speech, and bullying. But how often does the company get those takedowns wrong? Now you can find out.
For the first time, Facebook is revealing stats around the appeals it receives on content takedowns. "Our enforcement isn't perfect and as soon as we identify a mistake, we work to fix it. That's why we are including how much content was restored after it was appealed," Facebook VP Guy Rosen said in a blog post on Thursday.
The stats, which can be found in the company's latest Community Standards Enforcement Report, reveal that during this year's first quarter, Facebook restored more than 80,000 posts that were mistakenly removed as harassment. Notably, these posts were restored only after a user appeal was made.
In the first quarter, Facebook also restored over 130,000 pieces of content that were incorrectly flagged as hate speech. However, when the company takes down a piece of content for a violation, it usually gets the call right, at least according to its own stats. For instance, Facebook received 171,000 user appeals challenging takedowns of graphic violence content, but only 23,900 of those pieces of content were restored following an appeal. (Another 45,900 posts were restored after Facebook itself detected the mistake.)
The stats illustrate how Facebook's content moderation can be hit or miss. A big reason is that the company's AI-powered content-flagging systems make mistakes. For example, last year some Facebook users were trying to commemorate the death of actor Burt Reynolds by circulating a photo of the Hollywood star. However, the company's AI algorithms misread the photo as violating Facebook's community standards, which triggered an automated takedown.
It was only after a user appeal that Facebook permitted the photo to recirculate across the social network, the company's policy director, Monika Bickert, told journalists on a press call. In other instances, Facebook's automated systems can misread a post as spam because it contains a link suspected to be malicious.
Facebook's goal is to eventually iron out the mistakes and get better at detecting problematic content with improved AI algorithms. This comes as the company's content moderation policies have been under fire from across the political spectrum for either taking down too much content or removing too little.
"The system will never be perfect," Bickert said during the press call. But in the meantime, the company is trying to add greater transparency to Facebook's content moderation policies. "We know our system can feel opaque and people should have a way to make us accountable," she added.
You can expect the new content appeals stats to appear in all future enforcement reports. The company is also forming an independent body of experts to help oversee Facebook's content appeals process.
This article originally appeared on PCMag.com.