Vicarious Offense and Noise Audit of Offensive Speech Classifiers

Feb 1, 2023
Tharindu Cyril Weerasooriya, Sujan Dutta, Tharindu Ranasinghe, Marcos Zampieri, Christopher M. Homan, Ashiqur R. KhudaBukhsh
Abstract
This paper examines social web content moderation from two key perspectives: automated methods (machine moderators) and human evaluators (human moderators). We conduct a noise audit at an unprecedented scale using nine machine moderators trained on well-known offensive speech data sets evaluated on a corpus sampled from 92 million YouTube comments discussing a multitude of issues relevant to US politics. We introduce a first-of-its-kind data set of vicarious offense. We ask annotators: (1) if they find a given social media post offensive; and (2) how offensive annotators sharing different political beliefs would find the same content. Our experiments with machine moderators reveal that moderation outcomes wildly vary across different machine moderators. Our experiments with human moderators suggest that (1) political leanings considerably affect first-person offense perspective; (2) Republicans are the worst predictors of vicarious offense; (3) predicting vicarious offense for the Republicans is more challenging than predicting vicarious offense for the Independents and the Democrats; and (4) disagreement across political identity groups considerably increases when sensitive issues such as reproductive rights or gun control/rights are discussed. Both experiments suggest that offense is, indeed, highly subjective and raise important questions concerning content moderation practices.
Type: Publication
Publication: arXiv

We ran a large-scale experiment: nine different AI content moderation systems, each trained on a well-known offensive speech data set, scored a corpus sampled from 92 million YouTube comments about US politics. The results were striking: the systems flagged wildly different content as offensive, with little consistency across moderators. When we asked humans to label the same content, political identity was a major factor: Democrats and Republicans disagreed strongly on what is offensive, especially on sensitive topics such as reproductive rights and gun control/rights. This suggests that “offensiveness” is not a fixed fact an AI can simply learn; it is a subjective judgment shaped by values. Current content moderation practices that treat one group’s perspective as ground truth are therefore fundamentally unfair.
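For readers unfamiliar with the term, a noise audit here means running several independently trained offensive speech classifiers over the same comments and quantifying how much their verdicts diverge. The sketch below is only an illustration of that idea: the moderator names and binary verdicts are hypothetical placeholders (random stand-ins, not the paper's actual models or data), and pairwise Cohen's kappa is one possible agreement measure, not necessarily the one used in the paper.

```python
# Minimal noise-audit sketch: compare several offensive-speech classifiers
# on the same comments and measure their pairwise agreement.
# The moderators and verdicts below are placeholders, not the paper's models or data.
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

n_comments = 1_000    # stand-in for the sampled YouTube comment corpus
n_moderators = 9      # the paper audits nine machine moderators

# Hypothetical binary verdicts (1 = offensive, 0 = not offensive) from each
# machine moderator; in practice these would come from running each classifier.
verdicts = {
    f"moderator_{i}": rng.integers(0, 2, size=n_comments)
    for i in range(n_moderators)
}

# Pairwise Cohen's kappa: values near 1 mean two moderators agree,
# values near 0 mean their agreement is no better than chance.
for (name_a, preds_a), (name_b, preds_b) in combinations(verdicts.items(), 2):
    kappa = cohen_kappa_score(preds_a, preds_b)
    print(f"{name_a} vs {name_b}: kappa = {kappa:.2f}")
```

Low cross-moderator agreement on the same comments is the signal that moderation outcomes are noisy, which is what the paper reports across its nine machine moderators.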