Georgia Tech researchers have discovered a winning strategy to combat vaccine misinformation on X. Their new predictive tool and in-depth analysis reveal that positive, polite and evidence-backed replies are key to debunking false information online.
A groundbreaking analysis from Georgia Tech details that strategy for combating COVID-19 vaccine misinformation on X, the social media platform formerly known as Twitter. The study shows that users who respond with positive attitudes, politeness and strong evidence are more likely to persuade others to disbelieve inaccurate information.
Researchers across three Georgia Tech schools identified this approach and also developed a predictive tool. The tool assesses whether a user’s reply to misinformation is likely to change minds or to backfire and inadvertently reinforce the falsehood, and it can even flag well-intentioned replies that hinder effective social correction.
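The article does not describe how the tool is built, but a reply-outcome predictor of this kind can be framed as a binary text classifier. The sketch below is a minimal illustration using scikit-learn; the toy replies, labels and model choice are assumptions for demonstration, not the authors’ actual implementation or data.

```python
# Minimal sketch: classify a reply to misinformation as likely
# "corrective" or likely "backfire". All data and labels below are
# invented for illustration; they are not from the Georgia Tech study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

replies = [
    "Thanks for sharing, but the published trial data contradicts this claim.",
    "Respectfully, here is a peer-reviewed study showing the vaccine is safe.",
    "You'd have to be an idiot to believe this nonsense.",
    "WAKE UP. Anyone repeating this is a paid shill.",
]
labels = ["corrective", "corrective", "backfire", "backfire"]

# TF-IDF features over the reply text feed a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(replies, labels)

# Score a new reply before posting it.
print(model.predict(["Polite note: this claim was debunked; see the linked evidence."]))
```

A production tool would be trained on many labeled reply threads and would likely use richer features or a neural language model, but the framing of the prediction task is the same.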
Drawing a parallel with how white blood cells combat viruses, the team noted that social media users often band together to refute false information, a phenomenon known as social correction. Until now, however, the success rate of these collective efforts on most social media sites remained uncertain. The new study provides a clearer picture of how well they work on X.
Their method applies artificial intelligence to an extensive dataset of 1.5 million tweets containing misinformation about the COVID-19 vaccine. By meticulously analyzing user replies and their impact, the researchers gleaned valuable insights into successful corrective tactics.
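As a rough illustration of the signals such an analysis might examine, the heuristics below approximate the qualities the study associates with successful correction: politeness, positive tone and cited evidence. The word lists and proxies are assumptions made up for demonstration, not the study’s actual features.

```python
import re

# Toy proxies for politeness, positivity, and evidence in a reply.
# The word lists are illustrative assumptions, not the study's features.
POLITE_WORDS = {"please", "thanks", "thank", "respectfully", "appreciate"}
POSITIVE_WORDS = {"glad", "happy", "hope", "helpful", "good", "great"}

def reply_features(text: str) -> dict:
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {
        "politeness": len(words & POLITE_WORDS),                 # polite phrasing
        "positivity": len(words & POSITIVE_WORDS),               # positive tone
        "has_evidence": bool(re.search(r"https?://\S+", text)),  # linked source
        "shouting": text.count("!") + sum(w.isupper() for w in text.split()),
    }

print(reply_features("Thanks! Please see the trial results: https://example.org"))
```

Signals like these, computed at scale across millions of reply threads, could then be related to whether a reply ultimately changed the original poster’s stance.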
Notably, the study predates the rollout of X’s Community Notes feature, a system that lets users submit corrections to posts across the platform. The researchers pointed out that this feature restricts how users interact with fact-checking content and may not adequately reflect the platform’s broader information flow.
This research represents one of the first comprehensive taxonomies of social correction on the X platform. The researchers believe their findings will significantly bolster future fact-checking initiatives. Although the study concentrated on English-language text posts, the framework can be adapted to tackle the escalating threat of misinformation on a global scale.
The study, titled “Corrective or Backfire: Characterizing and Predicting User Response to Social Correction,” was co-authored by doctoral students Bing He and Yingchen (Eric) Ma and their advisers — Mustaque Ahamad, a professor with joint appointments in the School of Cybersecurity and Privacy and the School of Computer Science, and Srijan Kumar, an assistant professor in the School of Computational Science and Engineering.