This month, as more platforms embrace community-led content moderation, we ask whether the wisdom of the crowd is an effective cure for online misinformation.
Plus, the US government pollutes the climate debate, news sites fall for a fictitious freelancer, and why Will Smith's crowds aren't as fake as they look.
When TikTok recently launched "Footnotes", it became just the latest social media platform to adopt a user-generated fact-checking model based on X's Community Notes.
Community Notes enlists approved users, rather than paid moderators or fact checkers, to append corrections, or "notes", to misleading posts.
Facebook, Instagram and Threads (all owned by Meta) began rolling out the feature in March, while YouTube launched a "Notes" trial in June.
But does the model's effectiveness justify its newfound popularity?
A central feature of Community Notes is that proposed corrections are published only if they are rated "helpful" by enough other participants. Those raters must "come from different perspectives", which platforms estimate from their past rating behaviour.
X says this requirement for consensus means notes are helpful to the "broadest possible set of people", including those who usually disagree.
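How does a platform decide that raters "come from different perspectives"? X has open-sourced its Community Notes scoring, which models every rating with a simple matrix factorisation: each rating is predicted as a user bias, plus a per-note "intercept", plus the product of one-dimensional user and note factors. The factor term soaks up agreement that follows faction lines, so a note's intercept only rises when raters on opposite sides of the latent axis both call it helpful. The Python below is our own minimal sketch of that bridging idea, with invented data and parameters rather than X's production code.

```python
# A toy sketch of the "bridging" scoring idea behind Community Notes.
# Each rating is modelled as: user_bias + note_bias + user_factor * note_factor.
# The factor term absorbs agreement that follows faction lines, so a note's
# intercept (note_bias) only rises when raters on BOTH sides of the latent
# axis rate it helpful. Data, hyperparameters and model details here are
# illustrative; X's open-source scorer is considerably more elaborate.
import numpy as np

rng = np.random.default_rng(42)

# (user, note, rating) triples: 1 = "helpful", 0 = "not helpful".
# Users 0-2 and users 3-4 behave as two loose factions.
ratings = [
    # Note 0: rated helpful by users from both factions (a "bridging" note).
    (0, 0, 1), (1, 0, 1), (3, 0, 1), (4, 0, 1),
    # Note 1: rated helpful by one faction only; its raw average still looks good.
    (0, 1, 1), (1, 1, 1), (2, 1, 1), (3, 1, 0),
]
n_users, n_notes = 5, 2

user_bias = np.zeros(n_users)
note_bias = np.zeros(n_notes)           # the helpfulness "intercept" that matters
user_fac = rng.normal(0, 0.1, n_users)  # one-dimensional viewpoint factor
note_fac = rng.normal(0, 0.1, n_notes)

lr, reg = 0.05, 0.02                    # learning rate, L2 regularisation
for _ in range(5000):                   # plain stochastic gradient descent
    for u, n, r in ratings:
        err = r - (user_bias[u] + note_bias[n] + user_fac[u] * note_fac[n])
        user_bias[u] += lr * (err - reg * user_bias[u])
        note_bias[n] += lr * (err - reg * note_bias[n])
        user_fac[u], note_fac[n] = (
            user_fac[u] + lr * (err * note_fac[n] - reg * user_fac[u]),
            note_fac[n] + lr * (err * user_fac[u] - reg * note_fac[n]),
        )

for n in range(n_notes):
    votes = [r for _, m, r in ratings if m == n]
    raw = sum(votes) / len(votes)
    print(f"note {n}: raw helpful rate {raw:.2f}, bridging intercept {note_bias[n]:.2f}")
# The real scorer publishes only notes whose intercept clears a fixed threshold.
```

In this toy run, the partisan note looks good on a raw vote (three in four raters called it helpful) but earns a much lower intercept than the note backed by both factions, which is exactly the behaviour the consensus requirement is designed to produce.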
Misinformation experts, however, argue it promotes the logical fallacy that the truth lies somewhere between two extremes.
As one fact checker put it: "that is not how facts work: the shape of the Earth doesn't change even if social media users can't find consensus about it".
Data published by X suggests consensus can be hard to achieve.
Only around 8 per cent of notes on X were published in the two years to July 2025, according to The Repost's analysis. A further 4 per cent were rated "not helpful", while more than 87 per cent failed to secure enough ratings either way.
Early testing of Meta's feature suggests many corrections similarly languish in digital limbo.
On X, overall contributions have halved since the start of 2025, and the share of proposed notes that get published has been gradually shrinking.
For notes that do get published, the evidence on their effectiveness is mixed.
Studies have found that posts are more likely to be deleted and less likely to be shared after receiving a note, but that corrections can boost the offending account's followers.
Notes can also be motivated by partisanship. Some get published despite being misleading, while accurate corrections can fail to reach the public.
Published notes often arrive after the damage is done. Delays have shortened, but in 2025 the average time to publication for a note on X is 14 days.
Nadia Jude, a postdoctoral researcher at the University of Edinburgh who specialises in evaluating responses to misinformation, said evidence showed that Community Notes worked "relatively well" at achieving consensus when dealing with mundane and straightforward content, such as altered images and out-of-context internet jokes.
"It struggles when the content or narrative is divisive, political and harder to verify" or comes from high-profile accounts, she said, which "calls into question whether Community Notes can be seen to be addressing difficulties of scale, timeliness and legitimacy adequately".
Ms Jude suggested the model could be improved by involving experts such as journalists, fact checkers and professional content moderators, who could help select which posts get checked and ensure these choices consider a post's potential for harm.
Meta has taken a different view, announcing in January that Community Notes would replace its US third-party fact-checking program, which paid independent fact checkers to label misleading posts on its platforms.
QUT researcher Ned Watt, who studies the intersection of AI and fact checking, said it should not be an "either/or" choice between the two approaches, noting that TikTok was deploying Footnotes alongside its existing fact-checking program.
On X, fact checkers were referenced in seven per cent of English-language notes published since 2021, and they ranked as the third-most-frequently cited source across all notes published in 2024.
The uptake of Community Notes has coincided with increased political pressure on fact checkers globally and a broader retreat by digital platforms from protecting users.
Following in the footsteps of X, Meta has rolled back hate speech protections while TikTok has cut its content moderation teams. Google has also announced it will stop highlighting fact-check articles in its search results.
According to Ms Jude, a global rise in authoritarianism had emboldened platforms to pare back trust and safety efforts under the guise of "free speech".
In the US, authorities have pushed platforms to abandon their content moderation efforts since the re-election of President Donald Trump.
The shift to volunteer moderation is also a savvy business decision, Ms Jude said, because it frees up money for other priorities — namely, the AI arms race.
The two aren't always mutually exclusive: X recently announced it would "accelerate the speed and scale" of Community Notes by welcoming "AI Note Writers", whose notes will be rated and approved by human users.
Mr Watt said this approach was potentially risky, telling The Repost that AI models "hallucinate sources or whole scenarios, they're open to manipulation by their developers or owners at any time, and they're a far cry from the objective arbiters AI salespeople claim them to be".
He cautioned that such technical fixes were part of an attempt by platforms "to manufacture consensus about what additional information is helpful while at the same time sidestepping accountability for what content, however harmful, circulates on these platforms".
Actor and rapper Will Smith has been accused of manufacturing AI-generated crowds for a video of his recent music tour, but the distorted faces and limbs of fans aren't evidence of wholesale fraud. Fact checkers and deepfake experts say the concertgoers are real. Genuine photos have, however, been animated using AI image-to-video software. The video was likely distorted further by YouTube, which has been quietly experimenting with automatic sharpening and processing of video uploads. Mr Smith's latest brush with AI shows how far the technology has come since his "appearance" in the infamous 2023 spaghetti test.
It seems climate change misinformation has become official US policy, with the government releasing a report on the effects of greenhouse gas emissions that featured more than 100 false or misleading claims. That's according to the news site Carbon Brief, which found that nearly 10 per cent of the references cited in the report were written by its own authors. Climate scientists have accused the government of handpicking known climate sceptics to produce the "deceptive" report.
At least six news sites have retracted articles found to have been written by a fictitious freelance author, following a report by the UK's Press Gazette. Among the victims of "Margaux Blanchard" were the editors at Wired, who admitted to errors in their process but said the AI-powered writer was convincing enough to clear two separate AI detection tools. It's a warning to newsrooms to beef up their verification of pitches; but of course, you don't need AI to make stuff up.
Last week was Scam Awareness Week, offering an excuse to read about how Spanish fact checkers uncovered a scam network spanning 60 countries and more than 1,000 Facebook pages. The fraudulent pages impersonated public transportation services and lured users with offers of cheap travel cards only to redirect them to phishing sites. Check out Australia's National Anti-Scam Centre to keep up with the latest scams.
Our understanding of past events risks being washed away by a tsunami of shonky AI-generated history videos, says 404 Media. The report follows recent calls to develop new systems for preserving information, with misinformation expert Claire Wardle warning that the era of accessible digital history is coming to an end. Evidence that fact checkers rely on can quickly disappear as websites die, technology evolves or data gets deleted, including by governments. In an unexpected example of this risk, the UK government is urging citizens to "delete old emails and pictures" to reduce demand on its data centres, citing the need to save water on cooling after a prolonged dry spell.
Denmark is pioneering a new approach to combat deepfakes that would grant each of its citizens copyright over their own digital likeness. The proposal, contained in a draft bill, would allow people to demand the takedown of online deepfakes that use their face, voice or body, with fines for non-compliance. It raises interesting questions about what happens to your digital likeness after you die. A growing industry is offering services to reanimate deceased individuals, and just this month a family used the likeness of their child, the victim of a school shooting, to advocate against US gun violence.
UK researchers have released a new game to help boost your defences against online manipulation. Bad Vaxx is an immersive social media simulation which, drawing on the theory of "prebunking", builds resilience to misleading content by exposing people to misinformation tactics before they encounter them in the digital wild.
This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.