During the last few years, the term “fake news” has been so popularized, and even weaponized, by multiple parties that it has become difficult to define precisely.
However, one thing is clear. Fake news is a massive problem in our modern society.
According to a 2019 survey by Pew Research Center, Americans rate fake news as a larger problem than terrorism, racism, sexism or climate change. However it is defined, fake news is a serious problem that needs to be addressed.
At the University of Waterloo, a team of researchers developed a tool that uses artificial intelligence to do just that.
“The spread of such fake news and disinformation can cause tremendous societal harm, as it can mislead an unprecedented number of people around the world to not only believe in information and conspiracies that could be harmful to themselves (e.g., fake remedies for curing diseases) and others (e.g., fake news about other cultures and events around the world), but also could lead to wrong decisions being made (e.g., voting against a candidate based on false rumors),” said Alexander Wong, associate professor of systems design engineering at the University of Waterloo and a founding member of the Waterloo Artificial Intelligence Institute.
Wong collaborated with three graduate students from the university — Chris Dulhanty, the lead researcher, Jason Deglint and Ibrahim Ben Daya — to develop a tool that uses AI to help social media networks and news organizations weed out fake news.
They presented their paper this month at the Conference on Neural Information Processing Systems in Vancouver, Canada.
Who started it anyway?
President Trump may claim credit for inventing the term “fake news,” but it was actually first popularized by Craig Silverman, a media editor at BuzzFeed News, who has been using the term in his research and reporting since 2014.
In an interview with Westonian, Silverman defined fake news as “completely false content, created to deceive, and with an economic motive.”
Although fake news consists of outright lies, the sheer volume of it in our society today makes it harder for people to distinguish fake from real news and, therefore, harder to trust any news at all, even from reputable outlets.
According to the 2019 Pew survey, nearly 60 percent of Democrats and 70 percent of Republicans have dropped a news outlet because they judged the content to be fake news.
Those who are less politically aware were 20 percent more likely than their more politically aware counterparts to have reduced their overall news consumption out of fear of fake news.
Moreover, fully half of all respondents said they had avoided talking with someone because they thought that person might bring fake news into the conversation.
With their own biases and perspectives, every person defines fake news differently. And in the name of weeding out fake news, people are quick to judge and, ultimately, shut down anything they take to be fake, or, often, simply different from what they already believe.
This fake news scare creates an echo chamber, a constant feed of the same thoughts and beliefs, which makes people more vulnerable to fake news for lack of legitimate information and leads to an increasingly polarized society.
Flagging fake news
As long as there is something to be gained from doing it, there will always be people who intentionally churn out fake information.
Although they can’t stop fake news creators entirely, researchers at the University of Waterloo asked whether they could help journalists who genuinely want to inform the public better distinguish fake from real news.
According to Wong, the researchers trained their screening system on a large body of text containing claims and the articles that discuss them. The system learns the language of, and relationships between, claims and articles, and it keeps improving as it processes more text.
It can then apply what it has learned to assess whether a given claim is supported by other articles on the same subject.
If a claim is not supported by other articles, especially those from reputable and trustworthy news sources, it can be flagged as fake news.
“If they are, great, it’s probably a real story. But if most of the other material isn’t supportive, it’s a strong indication you’re dealing with fake news,” Wong said in a news release.
Given multiple claims on the same subject, the system was able to determine whether a claim was supported or unsupported, and therefore likely real or fake, nine times out of 10.
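At its core, this is a stance-detection problem: given a claim and an article, decide whether the article agrees with, disagrees with, merely discusses, or is unrelated to the claim, then aggregate those judgments across many articles. The sketch below illustrates that general idea in Python; the model name, label set and support threshold are illustrative assumptions, not the Waterloo team’s actual system or code.

```python
# Minimal sketch of claim/article stance aggregation.
# Assumptions (not from the Waterloo paper): a fine-tuned sequence-pair
# classifier named "my-org/stance-model" with labels AGREE / DISAGREE /
# DISCUSS / UNRELATED, and a simple support-ratio threshold.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "my-org/stance-model"  # hypothetical fine-tuned stance model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
LABELS = ["AGREE", "DISAGREE", "DISCUSS", "UNRELATED"]  # assumed label order

def stance(claim: str, article_body: str) -> str:
    """Classify the stance of one article toward one claim."""
    inputs = tokenizer(claim, article_body, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

def flag_claim(claim: str, articles: list[str],
               support_threshold: float = 0.5) -> str:
    """Aggregate stances across articles; flag the claim if support is low."""
    stances = [stance(claim, a) for a in articles]
    relevant = [s for s in stances if s != "UNRELATED"]
    if not relevant:
        return "insufficient evidence"
    support = sum(s == "AGREE" for s in relevant) / len(relevant)
    return "likely real" if support >= support_threshold else "possible fake news"
```

In a real screening workflow, a tool like this would surface the individual articles and their stances to a journalist rather than issue a verdict on its own, in line with the researchers’ emphasis on keeping humans in the loop.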
Not substituting, but strengthening
While some might think this system will replace journalists, the researchers have made clear that it is a screening tool meant to strengthen, not substitute for, journalists and fact-checkers at social media and news organizations.
“I firmly believe in the notion of human-machine collaboration, where AI should be designed in a way that helps people do their jobs faster, better and more consistently,” said Wong. “There are things that humans are fantastic at, other things where machines are fast and great at, so why not combine them to be greater than the sum of its parts?
“Therefore, we are motivated to do our part in helping to identify such fake news earlier to help mitigate their widespread belief, or at least bring awareness to such fake news to help people be more informed.”
According to Wong, combating fake news is a good example of AI bringing positive societal impact. Their system flags potentially fake news so that humans, drawing on their experience and context, can identify it faster and more reliably.
“We need to empower journalists to uncover truth and keep us informed,” Dulhanty said in a news release. “This represents one effort in a larger body of work to mitigate the spread of disinformation.”
The researchers are also planning how best to make the tool available to the general public, according to Wong. They hope to expand the system to cover other languages and additional approaches to fact-checking so it can identify fake news more comprehensively.