Facebook has announced that it labelled 180 million pieces of misinformation related to the US election on its platform. As announced in a company blog post, Facebook provided the numbers alongside its most recent transparency report.
Facebook has long had issues in dealing with misinformation on a range of topics. Until recently, it seemed the company placed a much higher value on free speech than on curbing the spread of misinformation.
This has led incoming President Joe Biden to accuse the social media company of regression on the topic. He feels social media platforms are not doing enough to curb the spread of misinformation.
It also emerged that an ex-Facebook employee believed the company was unable to tackle misinformation. They argued the company often ignored or took too long to act on evidence. As a result, this allowed misinformation to run wild on the platform.
Facebook has tried to hit back at these claims in this report. Facebook's VP of Integrity Guy Rosen shared a range of stats with reporters to illustrate the company's efforts in this area.
Facebook outlines its efforts to tackle misinformation
On this call, Rosen pointed out that on top of the 180 million labels Facebook added, it also removed 265,000 pieces of content for breaking the company's rules against voter interference.
Rosen also went on to note that 95 per cent of Facebook users do not click through the warning labels to view posts that are labelled for misinformation. This is important as it shows that the labels are actually having an effect.
These stats cover the period from March to Election Day, which means they do not offer any insight into Facebook's post-election efforts to tackle misinformation.
Other interesting bits of information shared by Rosen include removing 12 million pieces of content for sharing dangerous misinformation about the coronavirus between March and October. The company also labelled 167 million posts on the subject during that period.
Facebook provides context for its actions
Facebook also removed over 22 million pieces of content for hate speech during the third quarter of 2020. This number falls roughly in line with that of the previous quarter.
The company has also provided some context for these numbers, outlining that the prevalence of hate speech was 0.10 – 0.11 per cent. This means that for every 10,000 views of content, 10 to 11 were views of hate speech.
The company did not comment on the advertiser boycott that occurred this summer. It appears that the aforementioned prevalence statistic was designed to rebut any claims that hate speech was rampant on the platform.
Rosen did talk about the company's innovations in AI technology. He claims this is one of the main reasons for the company's increased ability to detect hate speech before it is reported.
However, some believe that Facebook is still some way off having an effective system. Recently a group of moderators wrote an open letter to the company. It alleged that the company’s AI-based tools were "years away" from genuine efficacy.