Fact check: Are X's community notes fueling misinformation?

World | Tuesday, 5 August 2025, 14:40
By: DW

New York: On July 9, the US government sanctioned United Nations Human Rights Council special rapporteur Francesca Albanese for what US Secretary of State Marco Rubio called a "campaign of political and economic warfare against the United States." 

Albanese has consistently denounced Israel's actions in Gaza since its offensive against the Palestinian group Hamas began in October 2023, as well as the Trump administration's efforts to suppress dissenting voices critical of Israel.

The announcement was rejected by the UN, which called for a reversal of the sanctions, and it also prompted a debate online, where Albanese's name began to trend on X (formerly Twitter). 

Posts poured in both defending and criticizing her work, accompanied in several cases by "Community Notes," X's signature tool for fighting misinformation. The notes, which are essentially brief clarifications or extra context attached to posts, can be submitted by any user who signs up as a contributor.

X claims it uses what it calls a "bridging algorithm" to prevent bias: notes are given more weight when they are upvoted by users who have historically held different viewpoints, theoretically reducing the chance that a single group can dominate the narrative.
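To make the idea concrete, here is a toy sketch in Python of what a bridging-style score could look like. It is not X's actual algorithm, which is reportedly a more elaborate matrix-factorization model; the viewpoint scores in [-1, 1] and the min() rule below are assumptions made purely for illustration.

```python
# Toy sketch of a "bridging" score. NOT X's production algorithm:
# the viewpoint scores and the min() rule are illustrative assumptions.

def bridging_score(ratings: list[tuple[float, bool]]) -> float:
    """ratings: (rater_viewpoint, upvoted) pairs, viewpoint in [-1, 1].

    A naive score would be the overall share of upvotes. A bridging
    score instead demands support from both sides of the viewpoint
    spectrum, so a note upvoted by only one camp scores low.
    """
    left = [up for view, up in ratings if view < 0]
    right = [up for view, up in ratings if view >= 0]
    if not left or not right:
        return 0.0  # no cross-viewpoint evidence at all
    # Upvote share within each camp; min() means one-sided
    # enthusiasm cannot carry a note on its own.
    return min(sum(left) / len(left), sum(right) / len(right))

# Upvoted across the spectrum -> high score:
print(bridging_score([(-0.8, True), (-0.2, True), (0.5, True), (0.9, True)]))  # 1.0
# Upvoted by one camp only -> low score, despite three of four upvotes:
print(bridging_score([(-0.8, True), (-0.7, True), (-0.5, True), (0.6, False)]))  # 0.0
```

The point of any bridging rule is the same: raw vote counts are replaced by a measure of agreement across groups that usually disagree.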

But that doesn't make the notes immune to error. In the case of Albanese, for instance, one community note claimed that "Francesca Albanese is not a lawyer," amplifying her critics' arguments about her qualifications and "ethical conduct."

While Albanese did admit in an interview with Vanity Fair that she never took the bar exam, which would have qualified her as a practicing attorney, she did study law. Her official profile on the website of the UN Office of the High Commissioner for Human Rights (OHCHR) describes her as an "international lawyer" who has authored publications on international law.

This example shows that while community notes can be a valuable tool for reducing the spread of disinformation, they are not always accurate and often fail to paint the whole picture.

Notes are meant to be a system where users collaboratively add context and verify facts. Research from Cornell University has shown that notes on inaccurate posts on X help to reduce reposts and increase the likelihood that the original author deletes the post.

However, according to an analysis of X data by NBC News, the number of community notes being published is declining, and DW Fact check spotted several examples of the tool misleading users instead of helping them spot falsehoods.

Misleading community notes slipping through 
In July 2025, a post by Sky News quoting the United Kingdom's Metropolitan Police chief went viral, accumulating over 4.7 million views. The post linked to a Sky News article based on an interview with the police chief, which highlighted structural inequality, noting it was "shameful" that black boys in London were statistically more likely to die young than white boys. 

A community note was later added, but it reframed the story, stating: "The headline lacks the essential context that despite making up only 13% of London's total population, Black Londoners account for 45% of London's knife murder victims, 61% of knife murder perpetrators, and 53% of knife crime perpetrators."

While factually correct, the note introduced unrelated crime statistics from 2022 — subtly shifting the focus from systemic inequality to framing black boys as perpetrators of crime. Instead of clarifying the issue, the note distorted the original message, misleading users who hadn't actually clicked on the link in the post. 

Community notes and elections  
Another problem was spotted by experts during the 2024 US presidential election.

Researchers Alexios Mantzarlis and Alex Mahadevan from the Florida-based Poynter Institute analyzed community notes posted on Election Day to assess whether the notes were helping to counter election misinformation.

Their findings raised concerns. Of all the fact-checkable posts analyzed, only 29% carried a community note rated as "helpful." In X's system, a note is rated "helpful" when it is upvoted by a diverse group of contributors and prioritized for public display.

But of these "helpful" notes, only 67% actually addressed content that was fact-checkable. In other words, nearly a third of the notes that appeared as helpful were attached to posts that didn't contain factual claims at all. 

The researchers saw this as a problem of low precision and recall: too few misleading posts were getting corrected, and even when notes appeared, many weren't targeting actual misinformation.  

As Poynter noted, "This is not the kind of precision and recall figures that typically get a product shipped at a Big Tech platform." 
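As a rough way to read those two figures, they can be mapped onto the standard recall and precision metrics the researchers invoke. The counts in the sketch below are hypothetical round numbers chosen only to make the percentages concrete; they are not the study's raw data.

```python
# Reading the Poynter figures as recall and precision.
# Counts are hypothetical round numbers, not raw study data.

fact_checkable_posts = 100      # hypothetical sample of checkable posts
posts_with_helpful_note = 29    # 29% carried a note rated "helpful"

helpful_notes = 100             # hypothetical sample of "helpful" notes
notes_on_checkable_claims = 67  # 67% addressed fact-checkable content

# Recall: of the posts that warranted a note, how many got one?
recall = posts_with_helpful_note / fact_checkable_posts
# Precision: of the notes surfaced as helpful, how many hit a real claim?
precision = notes_on_checkable_claims / helpful_notes

print(f"recall    ~ {recall:.0%}")     # ~29%
print(f"precision ~ {precision:.0%}")  # ~67%
```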

Meanwhile, Germany's Alexander von Humboldt Institut für Internet und Gesellschaft, a research institute based in Berlin, analyzed nearly 9,000 community notes in the run-up to the country's federal elections in February this year and found that "community notes follow political patterns."

The institute said, "Users who write notes are not free of political views. Their assessments and comments may therefore be influenced by their own interests or ideological biases."  

Poynter's Mahadevan explained in an interview with DW's fact-checking team how people may be gaming the system: when someone new joins Community Notes, X assumes they're unbiased because they haven't rated many notes yet.

"Bad actors and troll farms have figured out you can flood the system with new accounts to upvote certain viewpoints and get those notes published," says Mahadevan.