SciTech

Why does YouTube get less criticism for hate speech?

Hate and hate speech are on the rise in America. As last week’s shooting at the Tree of Life Synagogue sadly proved, the U.S. has a darker side.

Platforms like Facebook and Twitter have again come under public scrutiny for their role in perpetuating hate speech by allowing fascist, neo-Nazi, and white supremacist organizations to publish and share content. This has long been a problem with social media and online forums: unrestricted access combined with lax enforcement of content guidelines often leads to a concentration of toxic individuals who degrade the experience for everyone else. Just think: how often do you scroll through Facebook comments on a controversial post without coming across something racist or hateful?

Twitter and Facebook are used to this critical attention, and while it’s true that they act as vectors for the spread of hate, there is one entity that hasn’t been shouldering its fair share of the blame: Google. Or, more specifically, Google’s video-sharing platform, YouTube.

Much like Twitter and Facebook, YouTube’s focus on user-created content inevitably helps the spread of hate speech and bigotry, as members of the alt-right create videos espousing their views. These include such infamous and controversial figures as Alex Jones, founder of Infowars, and Sargon of Akkad, who makes videos countering progressive stances on issues such as LGBTQ rights and feminism.

An analysis of fascist activist chat servers published by Bellingcat found that the majority of respondents credited YouTube with having “red-pilled” them, a term originating from The Matrix and now used to describe an “awakening” to fascist and anti-Semitic beliefs. These include the beliefs that the Jewish population is at the center of a global conspiracy (commonly called the Jewish Question, or JQ) and that the Holocaust never happened.

If YouTube is one of the biggest contributors to the problem, why don’t we hear about it more often?

It could be because YouTube’s recommendation algorithms act as a double-edged sword, suggesting only videos relevant to each user: those who don’t affiliate with the alt-right don’t see its videos, while those who do see proportionally more of them. This shields the majority from controversial content, but it also creates an echo chamber for the minority, concentrating the evil that’s already there. Twitter and Facebook are different: comments and tweets carry a sense of immediacy, appearing right in your feed and proving much harder to avoid.
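To see how that narrowing happens, consider a toy sketch in Python. The catalog, the topics, and the one-line “recommend more of the same” rule are all invented for illustration; this is nothing like YouTube’s actual recommendation code, but it shows how a feedback loop forms once a user’s history tilts one way:

import random

# A made-up catalog of videos grouped by topic (illustrative only).
VIDEOS = {
    "cooking": ["pasta basics", "knife skills", "5-minute meals"],
    "politics": ["debate highlights", "border policy rant", "conspiracy deep dive"],
}

def recommend(watch_history, catalog, n=3):
    # Count how often each topic appears in the user's history.
    counts = {}
    for topic in watch_history:
        counts[topic] = counts.get(topic, 0) + 1
    favorite = max(counts, key=counts.get)  # the dominant interest wins
    pool = catalog[favorite]
    return random.sample(pool, min(n, len(pool)))

# One click on a fringe video tips the loop: every later suggestion
# comes from the same bucket, and the user never sees anything else.
history = ["politics"]
for _ in range(3):
    print(recommend(history, VIDEOS))
    history.append("politics")  # the user watches what was recommended

After a single “politics” click, every subsequent suggestion comes from the same bucket: an echo chamber in miniature.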

YouTube has also taken steps to crack down on extremist content, albeit through a heavy-handed approach that lets its algorithms demonetize any video touching on sensitive social issues, tragedy, or conflict, as reported by The Intelligencer. While this helped curb some of the hate, it also hit many prominent YouTube creators and even motivated a shooting at YouTube’s headquarters on April 3, 2018, as reported by The New York Times.

The situation raises an uncomfortable question: what should tech companies do?

After any attempt to crack down on hate speech, those who feel they are being silenced lash out and claim that these platforms are infringing upon their First Amendment right to free speech. This has long been a contentious topic, even though the argument isn’t entirely correct: the First Amendment ensures that the government cannot deprive you of your right to free speech, but it says nothing about private entities like Facebook and Google. Businesses reserve the right to do business with whomever they choose, a point that’s often overlooked.

However, with the great power of censorship and control comes the equally great responsibility to exercise that power judiciously and impartially. After all, there can be a thin line between crass internet humor and actively discriminatory content, and although we as humans have a vague idea about which is which, there’s no objective way to differentiate between the two. As such, it’s very hard to create an algorithm that susses out the good from the bad, especially when computers can’t understand abstract ideas like hate and bigotry.
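To make that difficulty concrete, here is a deliberately naive keyword filter, sketched in Python with placeholder terms rather than any real lexicon. No platform moderates this way in practice, but it shows why rule-based filtering struggles: it flags an innocuous quotation while letting coded, dog-whistle phrasing sail through:

# Placeholder terms stand in for a real blocklist (illustrative only).
BLOCKLIST = {"slur1", "slur2"}

def is_hateful(comment: str) -> bool:
    # Flag a comment if any word matches the blocklist: no context, no intent.
    words = set(comment.lower().split())
    return bool(words & BLOCKLIST)

# A news quote gets flagged (false positive)...
print(is_hateful("Protesters at the rally chanted slur1 for hours"))  # True
# ...while coded, dog-whistle phrasing sails through (false negative).
print(is_hateful("We all know who really controls the banks"))        # False

Making the filter context-aware means handing the judgment call to a statistical model, which inherits the same ambiguity about where crass humor ends and hate begins.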

There has been some backlash against YouTube’s blunt implementation of censorship, with organizations like Prager University claiming that YouTube has restricted some of their more controversial videos, especially ones discussing topics like gender equality and racism, as reported by Ars Technica. Twitter and Facebook, on the other hand, are hesitant to take an editorial approach to their content, focusing more on building their products than on managing their consequences. As Ev Williams, cofounder of Twitter and CEO of Medium, put it in an interview with CNN, “...you get into an area where most tech companies would be like, ‘it’s not something that really fits in our model or that we would even be good at.’” But as the demand for stricter regulation rises, so does the pressure on these companies to take a more proactive role.

No matter how you parse it, tech giants are becoming tied to our divisive politics, since they power the digital forums in which we discuss them. These companies are faced with the dual task of moderating speech while steering away from excessive censorship. Striking that balance is where the real challenge lies: the code that software engineers write might not have inherent ethical values, but the people who create and consume the product sure do, and how these companies choose to uphold their values will have untold consequences for generations to come.