It’s no surprise that the twenty-first century is called the digital era. Cyber culture, particularly social media, has surpassed all prior forms of exchanging ideas and communicating: it allows users to connect instantly and access more information than ever before. As with anything, however, there are negatives. Social media has become a venue for normalizing hate speech.
Since 2010, I’ve been active on social media and have encountered my share of online abuse and internet “trolls.” A troll is typically an anonymous user who makes false and offensive remarks about another person for the sole purpose of eliciting negative responses. Hate speech, by contrast, is the use of harsh or threatening language or writing to inculcate prejudice toward a certain group, most often a marginalized one.
Much of the world now communicates via social media, and social media giants such as Meta, the company formerly known as Facebook, have come under fire in recent years because of widespread online hatred. If you follow the news, you’ve probably already heard of Frances Haugen, who came forward as a whistleblower in early October 2021 and testified before the United States Senate Commerce Committee about Meta’s internal research, demonstrating that the company knowingly amplifies hate and abuses its Section 230 immunity.
Social media is a relatively young phenomenon, and many people are still discovering the extent to which it has impacted our lives. Whether its influence is negative, positive, or neutral, social media does shape our society, which is why the concept of hate speech becomes more complicated when applied to social media platforms. Hate speech on the internet is further complicated by Section 230 of the Communications Decency Act. Section 230 absolves online platforms of responsibility for the content their users contribute while granting them power over that content without treating them as publishers. This means that corporations that facilitate online communication, such as Facebook and Twitter, can classify hate speech according to their own standards, making it appear as though these platforms can act as their own governments.
Unfortunately, it appears that those prone to racism, misogyny, and homophobia have carved out niches on social media sites to support their ideas. Social media also provides a forum for violent individuals to broadcast their actions, such as Dylann Roof, the 2015 South Carolina shooter. According to prosecutors, Roof “self-radicalized” online prior to murdering nine people at a Black church. More broadly, Meta has a reputation for permitting hate groups and terrorist organizations, such as al-Qaeda, to operate on its platform. Not only is this detrimental to the groups of people targeted, but it’s distressing to know that this type of content is still being published on such sites.
There is much more to be learned about human rights and social media. Section 230 of the Communications Decency Act was enacted in 1996, and it is past time to rethink and reform social media sites’ “safe harbor” provisions. The most recent effort is a bill called the Justice Against Malicious Algorithms Act, which would repeal Section 230 immunity when an online platform uses an algorithm to recommend content to a user based on that user’s personal information and the recommendation results in physical or severe emotional injury.