We all saw some very ugly scenes from Charlottesville last week, when left- and right-wing hate groups clashed during protests, resulting in fights, injuries, arrests, and even the death of a protester.
Not only in the United States but throughout the world, the number of incidents involving hate groups and hate speech is increasing. Many experts claim that one of the major reasons behind this increase is social media.
Why blame social media? Because it has become the favorite platform for such groups not only to spread hate speech and propaganda, but also to organize, recruit, and plan events like the one in Charlottesville. According to a recent study, hate groups like the KKK and extremist right- and left-wing groups have exploded on social media in the last two to three years.
The study – Hate on Social Media: A Look at Hate Groups and Their Twitter Presence – also found that hate groups on social media grew by 3% in the last year, with anti-immigration sentiment as their main focus. The most alarming finding was the support these groups enjoy on social media: the number of likes and shares on their posts in 2016 was more than ten times what it was in 2008.
In 2008, the average number of likes per post by these hate groups was only 0.11; by 2016, it had risen to 7.68.
Some of the major social media platforms currently used by hate groups are Facebook, Twitter, and YouTube. Beyond those, many organized groups use message boards and online forums.
What Are Social Media Giants Doing to Tackle the Problem?
The question here is: are these platforms treating this as a serious issue? Because there is a very thin line between hate speech and freedom of speech, many networks are unsure how to fix the problem.
At the same time, after a growing number of incidents in which hate speech turned into violent action and blame later fell on social media, many networks feel pressure to take steps against hate speech on their platforms.
Germany is drafting a law that, if approved, would allow fines of up to $55 million on Twitter and Facebook over hateful content on their platforms.
Similarly, the EU is finalizing a proposal that would force YouTube, Facebook, and Twitter to delete and block hate speech videos. A 2016 report compiled by the EU Justice Commissioner found that YouTube was the fastest to respond to reported hateful content, while Twitter was the slowest.
YouTube is also taking the issue more seriously by enforcing stricter rules against content containing hate speech. It also plans to place user-flagged videos with controversial supremacist or religious content in a "limited state."
In this limited state, the video remains on YouTube but is not recommended and loses key features such as likes, suggested videos, and comments.
Twitter, on the other hand, has received the most criticism for showing indifference to hateful speech posted on its platform. A week ago, one activist even painted hate speech tweets on the road in front of Twitter's Hamburg HQ to protest the company's lack of action.
Even though Twitter lists hateful conduct, including hate speech, among offenses that can result in a ban, most users complain of slow responses, or no response at all, to reported content.
Similarly, Snapchat has community guidelines that prohibit users from spreading any content that falls under the category of hate speech, and Tumblr maintains similar guidelines.
After the events in Charlottesville, Facebook was quick to remove pages created and used by white supremacists, including "White Nationalists United" and "Right Wing Death Squad".
With stricter laws against hate speech in Europe and proposed fines for hateful content in countries like Germany, it looks like the social media giants are finally taking steps to counter the problem. But there is still a long way to go: with hundreds of thousands of comments, posts, videos, and tweets published on these networks every day, swift action is often impossible.
And with strong support for free speech among many segments of society, it is also not easy for these networks to distinguish between removable hateful speech and content protected by freedom of speech.