Nearly a year after it was revealed that Facebook was going soft on white supremacists who use its platform to spread hate, and just weeks after a man with white supremacist views killed 50 people at mosques in New Zealand, the social media giant is coming around.
Facebook announced last week that it would extend its ban on hate speech to include the promotion and support of white nationalism and white separatism. People who search for such content will be redirected to Life After Hate, a group that helps people leave the violent far-right. The ban applies to Facebook-owned Instagram as well.
The company had banned explicit white supremacist content – in which users proclaim the superiority of the white race – following a murder by a white supremacist two years ago in Charlottesville, Virginia. Content spreading white nationalism – the idea that the country should always have a white majority and be led by whites – and white separatism – the idea that whites should live together and apart from other races – was not scrubbed at the time because the company said it would be too difficult to differentiate such posts from legitimate content.
That never made sense, except that it offered Facebook an easy way out. Civil rights groups have rightly argued that white nationalism and separatism are inseparable from white supremacy, and that they are distinct from other, legitimate separatist movements.
Finding and deleting this material among the hundreds of millions of daily posts is difficult. Automated systems miss much of it, and moderators are given little time to decide what should stay and what should go.
But Facebook has done it before. The company has focused on ridding the platform of content related to the Islamic State group and al-Qaida.
However, just like law enforcement and domestic counterterrorism agencies before it, Facebook ignored the rising tide of hate from white supremacist groups, which now represent a threat in this country at least as large as Islamic extremism.
Facebook is at least now catching up: With the announcement, the company becomes the first social media company to ban such content. The ban is a welcome recognition of the fact that online racism is connected directly to organized hate – through messaging, recruiting and more – and has led to real-world violence.
There is more to do. For now, the ban will only apply to explicit praise, support or representation. However, white supremacist groups have become skilled at spreading their hate through implicit and coded messages. If Facebook is serious about not allowing its platform to be used for dark purposes, it will ferret out implicit messages as well; it’s just a matter of putting the company’s ample resources in the right place.
And the fact that the New Zealand gunman livestreamed the shooting on Facebook, and that Facebook and other social media sites struggled to stop the video from going viral in the days afterward, shows that these companies, which wield so much influence, still have a long way to go in rooting out hate.
But Facebook’s ban is a good step. Others should follow, and we should keep pushing Facebook to do the right thing.