Caitlin Ring Carlson, PhD
Faculty Fellow and Associate Professor of Communication and Media
When it comes to regulating potentially harmful content on their sites, social media companies are in a difficult position. Legally, they are not required to follow First Amendment jurisprudence in their content moderation efforts. Instead, they are free to establish whatever community standards they feel are most appropriate for their platforms. Social media companies then use a combination of artificial intelligence and human content moderators to enforce their established rules.

Unfortunately, many of these companies have been slow to use existing technology to remove harmful content such as hate speech. This inaction has resulted in a range of harms, from cyberbullying to ethnic cleansing campaigns. In Myanmar, for example, military personnel used Facebook to target the Rohingya, resulting in the exodus of over 700,000 people from that country since 2017. Facebook has been widely criticized for its failure to remove the offensive posts, many of which compared the Rohingya to dogs and called for the destruction of their race. The U.N. criticized the company for its role in facilitating ethnic violence.
While a substantial amount of research exists on the ethics of the content moderation process (Roberts, 2019; Gillespie, 2018; Klonick, 2018), and extensive work has been done to explore the ethical frameworks that should guide social media companies (Johnson, 2017; Carlson & Cousineau, 2020; Santa Clara Principles, 2018), far less has been said about how to convince social media companies to take seriously the implications of their actions or, in this case, their inaction. Carlson's project seeks to reframe aggressive content moderation of hate speech as a critical component of corporate social responsibility for social media organizations.
Explore her recent book and related publications on hate speech:
Carlson, C.R. (2021). Hate Speech. MIT Press.
Carlson, C.R. (2020). Hate speech as a structural phenomenon. First Amendment Studies.
Carlson, C.R. & Witt, H. (2020). Online harassment of U.S. women journalists and its impact on press freedom. First Monday, 25(11).
Carlson, C.R. & Cousineau, L. (2020). Are you sure you want to view this community? Exploring the ethics of Reddit’s quarantine practice. Journal of Media Ethics.
Carlson, C.R. (2020). Exploring legal responses to hate speech in the United States. Journal of Media Law & Ethics, 8(1), 32-54.
Carlson, C.R. & Rousselle, H. (2020). Report and repeat: Investigating Facebook’s hate speech removal process. First Monday, 25(2).