After President Biden won the election nearly three years ago, three in 10 Americans believed the false narrative that his victory was the result of fraud, according to a survey. In the years since, fact-checkers have debunked the claim in extensive articles, corrections posted to viral content, videos, and chat rooms.
This summer they received a verdict on their efforts in an updated survey from Monmouth University: Very little has changed. Three in 10 Americans still believed the false narrative.
With a wave of elections scheduled for next year in dozens of countries, the global fact-checking community is taking stock of its efforts over a busy few years, and many don’t love what they see.
The number of fact-checking operations at news organizations and elsewhere has plateaued, and perhaps even declined, after skyrocketing in response to a surge in unsubstantiated claims about the election and the pandemic. Social media companies that once touted efforts to combat misinformation are showing signs of waning interest. And those who write about falsehoods around the world face increasing harassment and personal threats.
“The situation is not improving,” said Tai Nalon, a journalist who runs Aos Fatos, a Brazilian fact-checking and disinformation monitoring company.
Elections are scheduled for next year in more than 5,500 municipalities across Brazil, and a few dozen Aos Fatos fact-checkers will monitor them. The prospect exhausts Ms. Nalon, who has spent the last few years dealing with a president who trafficked in misinformation, bizarre theories about the pandemic and an increasingly polluted online ecosystem rife with harassment, distrust and legal threats.
Her organization, one of the leading operations of its kind in Brazil, began in 2015, as attention to combating false and misleading content online was rising. It was part of a fact-checking industry that flourished around the world. At the end of last year there were 424 fact-checking websites, up from just 11 in 2008, according to an annual census by the Duke Reporters’ Lab.
Organizations used an arsenal of tools new and old: fact checks, prebunks that tried to educate audiences about misinformation before they encountered it, context labels, accuracy indicators, warning screens, content removal policies, media literacy training and more. Meta-owned Facebook helped drive some of the growth in 2016 when it began working with and paying fact-checking operations. Other platforms, such as TikTok, eventually followed suit.
However, momentum appears to be stalling. This year, only 417 sites are active. The addition of new sites has slowed for several years, with just 20 last year compared to 83 in 2019. Sites like Baloney Meter in Canada and Fakt Ist Fakt in Austria have gone silent in recent years.
“The stabilization represents a kind of maturity of the field,” said Angie Drobnic Holan, director of the International Fact-Checking Network, which the nonprofit Poynter Institute started in 2015 to support fact-checkers around the world.
The work continues to attract interest from new parts of the world, and some think tanks and good-governance groups have begun offering their own fact-checking services, experts said. But harassment and government repression remain significant deterrents. Political polarization has also made fact-checking and other defenses against misinformation a target for right-wing influencers, who claim that the people debunking them are biased against them.
Yasmin Green, the chief executive of Jigsaw, a group within Google that studies threats like misinformation and extremism, recalled a study in which a participant skimmed a fact check shared by a CNN journalist and dismissed it outright. “Well, who fact-checks the fact-checkers?” the participant asked.
“We’re in this very distrustful environment where you’re evaluating solely on the basis of the speaker, and you’re distrustful of people whose judgment you decide is not trustworthy,” Ms. Green said.
Intervening against misinformation has a broadly positive effect, according to researchers. Experiments conducted in 2020 concluded that fact checks in many parts of the world reduced false beliefs for at least two weeks. A Stanford team determined that education about misinformation after the 2016 election had likely contributed to fewer Americans visiting non-credible websites in 2020.
Success, however, is inconsistent and depends on many variables: the viewer’s location, age, political leanings and level of digital engagement, and whether a fact check is written or illustrated, succinct or explanatory. Many efforts never reach crucial demographic groups, while others are ignored or resisted.
After falsehoods invaded Facebook during the pandemic, the platform instituted policies against Covid-19 misinformation. However, some researchers questioned the effectiveness of the efforts in a study published this month in the journal Science Advances. They determined that while the amount of anti-vaccine content had decreased, engagement with the remaining anti-vaccine content had not.
“In other words, users interacted with the anti-vaccine content as much as they would have if the content had not been removed,” said David Broniatowski, a professor at George Washington University and an author of the paper.
The researchers found that the remaining anti-vaccine content was more likely to be misleading, with users linking to less trustworthy sources than before Facebook implemented its policies.
“Our integrity efforts continue to lead the industry and we are focused on addressing industry-wide challenges,” Meta spokesperson Corey Chambliss said in an emailed statement. “Any suggestion to the contrary is false.”
In the first six months of this year, more than 40 million posts on Facebook received a fact-checking label, according to a report that the company presented to the European Commission.
Social platforms where false narratives and conspiracy theories still spread widely have pulled back resources for fighting misinformation over the past year. Fact-checking organizations and similar groups gradually became more dependent on social media companies for a financial lifeline, researchers found; misinformation watchers now worry that increasingly budget-conscious tech companies will begin to cut their philanthropic spending.
If Meta ever cut the budget for its third-party fact-checking program, it could “decimate an entire industry” of fact-checkers who rely on its financial support, said Yoel Roth, the former head of trust and safety at Twitter, now a visiting scholar at the University of Pennsylvania. (Meta said its commitment to the program had not changed.)
X has undergone some of the most significant changes of any platform. Elon Musk, its billionaire owner of less than a year, has embraced an experiment that relies on the platform’s own unpaid users rather than paid fact-checkers and safety teams. The expanded fact-checking program, Community Notes, allows anyone to write corrections on posts. Users can rate a note as “helpful” so that it becomes visible to everyone; notes have appeared alongside posts from Mr. Musk and President Biden, and even on a viral post about a groundhog falsely accused of stealing vegetables.
X did not respond to a request for comment. Tech regulators raised concerns this week about the quality of content on X after The Information reported that the platform was cutting half of the team dedicated to managing misinformation about election integrity; the company had said less than a month earlier that it planned to expand the team.
Crowdsourced fact-checking has shown mixed results in research, said Valerie Wirtschafter, a fellow at the Brookings Institution. An article she co-wrote in the Journal of Online Trust and Safety found that the presence of a community note did not keep posts from spreading widely. Users who created misleading posts saw no change in engagement on subsequent posts, suggesting that they paid no penalty for sharing falsehoods.
“I’ve never found a way to keep humans in the loop,” Mr. Roth said in an interview. “My belief, and everything I’ve seen, is that Community Notes alone is not a sufficient replacement.”
Those fighting false narratives and conspiracy theories are also wrestling with another complication: artificial intelligence.
The technology’s reality-warping capabilities, which still confound many of the tools designed to identify its use, are already keeping fact-checkers busy. Last week, TikTok said it would test an “AI-generated” label, automatically applying it to content detected as having been edited or created with the technology.
Tests are also underway that use artificial intelligence to quickly analyze the huge volume of false information, identify frequent spreaders and respond to inaccuracies. The technology, however, has a shaky track record with the truth. When the fact-checking organization PolitiFact tested ChatGPT on 40 claims that had already been meticulously researched by human fact-checkers, the AI either made a mistake, refused to answer or arrived at a different conclusion from the fact-checkers half of the time.
Between new technologies, fluctuating policies and stressed regulators, the online information ecosystem is in its messy teenage years. “It’s gangly, it has acne and it’s in a bad mood,” said Claire Wardle, co-director of the Information Futures Lab at Brown University.
Still, she is hopeful that society will learn to adapt and that most people will continue to value accuracy. Disinformation during the 2022 midterm elections was less toxic than feared, thanks in part to media literacy efforts and training that helped the authorities respond far more quickly and aggressively to rumors, she said.
“We tend to obsess over the worst conspiracies: the people who became radicalized,” she said. “Actually, most of the public is pretty good at understanding all this.”