Researchers Identify Key Challenges in Detecting Violent Speech Targeting Asian Communities

A groundbreaking study by Georgia Tech and the Anti-Defamation League has revealed significant challenges in detecting violent speech aimed at Asian communities online, underscoring the need for more advanced algorithms and community-focused approaches.

Researchers at Georgia Tech and the Anti-Defamation League (ADL) have uncovered significant gaps in digital platforms’ ability to detect violence-provoking speech targeting Asian communities. The findings call for urgent improvements in the moderation technology used by social media platforms.

The research revealed that current algorithms often fail to distinguish violence-provoking speech, which implicitly or explicitly encourages violence against a group, from general hate speech. This critical gap allows harmful rhetoric to persist and potentially lead to real-world violence.

“The COVID-19 pandemic brought attention to how dangerous violence-provoking speech can be. There was a clear increase in reports of anti-Asian violence and hate crimes,” Gaurav Verma, a Georgia Tech doctoral candidate who led the study, said in a news release. “Such speech is often amplified on social platforms, which in turn fuels anti-Asian sentiments and attacks.”

The study shows that while humans can differentiate violence-provoking speech from other forms of hateful content, automated systems struggle with subtle language cues. The team tested five different natural language processing (NLP) models, finding that while the models scored 0.89 (on a scale from 0 to 1, with 1 being best) when detecting hate speech, they managed only 0.69 when identifying violence-provoking speech.
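
To make the comparison concrete, a gap like this is typically measured by training a separate classifier for each task and comparing scores such as F1 on held-out data. The sketch below illustrates that evaluation pattern with a simple TF-IDF baseline in scikit-learn; the file names, and the baseline model itself, are illustrative assumptions, not the five models the team actually tested.

```python
# Minimal sketch: compare detection scores across two labeling tasks.
# Assumes hypothetical CSVs with "text" and binary "label" columns;
# this TF-IDF baseline stands in for the study's actual NLP models.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def evaluate_task(csv_path: str) -> float:
    """Train a simple text classifier for one task and return its test F1 score."""
    df = pd.read_csv(csv_path)
    X_train, X_test, y_train, y_test = train_test_split(
        df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
    )
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    model = LogisticRegression(max_iter=1000)
    model.fit(vectorizer.fit_transform(X_train), y_train)
    predictions = model.predict(vectorizer.transform(X_test))
    return f1_score(y_test, predictions)

# Hypothetical per-task datasets; in practice the harder task scores lower.
print("Hate speech F1:", evaluate_task("hate_speech.csv"))
print("Violence-provoking F1:", evaluate_task("violence_provoking.csv"))
```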

This performance gap underscores the urgency of refining detection methods: left unchecked, such speech can escalate into real-world harm. The prevalence of anti-Asian rhetoric during the COVID-19 pandemic fueled an alarming 339% increase in hate crimes against Asian Americans in 2021, a stark illustration of digital hate speech’s real-world impact.

“We believe that we cannot tackle a problem that affects a community without involving people who are directly impacted,” Jiawei Zhou, a doctoral student specializing in human-centered computing at Georgia Tech, said in the news release.

Zhou emphasized the importance of incorporating community insights into research methodologies to address the nuances of violence-provoking speech effectively.

The study’s community-centric approach involved creating a specialized codebook and crowdsourcing data from 120 Asian community members, who labeled 1,000 posts from X (formerly Twitter). The collaborative process not only informed the research but also fostered more accurate data categorization.
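
Annotation efforts like this typically verify that annotators apply the codebook consistently before the labels are used for training or evaluation. A common check is inter-annotator agreement; the snippet below shows a minimal Cohen’s kappa computation over made-up labels from two hypothetical annotators, not the study’s actual annotation data.

```python
# Illustrative consistency check for crowdsourced labels using Cohen's kappa.
# The label arrays are invented for demonstration; they are not study data.
from sklearn.metrics import cohen_kappa_score

# Labels from two hypothetical annotators over the same ten posts:
# 0 = benign, 1 = hateful, 2 = violence-provoking
annotator_a = [0, 1, 2, 2, 0, 1, 0, 2, 1, 0]
annotator_b = [0, 1, 2, 1, 0, 1, 0, 2, 2, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.70; above 0.6 is often read as substantial agreement
```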

“One of the major challenges in studying violence-provoking content online is effective data collection and funneling down because most platforms actively moderate and remove overtly hateful and violent material,” added Rynaa Grover, a recent Georgia Tech graduate in computer science.
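
One common way around that problem, hinted at in Grover’s remark, is to cast a wide net first: filter a post stream with a broad seed lexicon for recall, then let human annotators supply the precision. The sketch below shows that funneling step with a placeholder lexicon and sample posts; it is an assumption about the general approach, not the study’s actual pipeline.

```python
# Illustrative keyword-based funneling: surface candidate posts for human review.
# The seed terms and sample posts are placeholders, not the study's pipeline.
import re

# A deliberately broad, hypothetical seed lexicon; recall first, precision later.
SEED_TERMS = ["asian", "chinese", "covid"]
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, SEED_TERMS)) + r")\b", re.IGNORECASE)

def candidate_posts(posts):
    """Yield posts matching any seed term; annotators assign the final labels."""
    for post in posts:
        if PATTERN.search(post):
            yield post

sample = [
    "Great dim sum spot downtown!",
    "Blaming Asian communities for COVID is dangerous rhetoric.",
    "The weather is nice today.",
]
for post in candidate_posts(sample):
    print(post)
```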

Faculty mentors Srijan Kumar and Munmun De Choudhury guided the research. Kumar and De Choudhury are prominent figures in computational science and human-computer interaction, respectively, with extensive backgrounds in online safety and mental health research.

The collaboration with ADL researchers Binny Mathew and Jordan Kraemer further enriched the study, anchoring it in the practical, real-world work of combating hate and extremism.

The team’s findings were presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) in Bangkok, Thailand.