The Repost #3: July 2025

This month, we unpack concerns over Russia's new fact-checking strategy and round up some of the biggest furphies sparked by the Israel-Iran conflict.

We also look at how AI is being used to manipulate music-streaming platforms and how Australia's "freedom" movement featured in the recent Los Angeles protests.

Russian network takes aim at fact checking

A Russian flag superimposed on a keyboard

Russia has launched its own self-described fact-checking association to counter what it calls the West's "relentless stream of fake stories" and "biased pseudo-fact-checking", in a move that experts say aims to sow confusion and spread disinformation.

Launched in April, the Global Fact-Checking Network (GFCN) mimics genuine fact-checking bodies such as the International Fact-Checking Network (IFCN) and European Fact-Checking Standards Network (EFCSN), which collectively represent more than 165 fact-checking organisations across six continents. (RMIT's previous fact-checking outfits have been members of the IFCN.)

But does Russia's latest venture stack up?

Despite sharing a similar name, the GFCN differs from its European and international competitors in several important ways. For one thing, it does not explicitly limit membership to editorially independent organisations that have already been publishing impartial fact checks. Journalists, NGOs, bloggers and "opinion leaders" are all welcome to join.

And while the network's Code of Responsible Fact-Checking requires that members "strive to be objective" in their work, it does not demand that they apply consistent methods to all fact checks, name their editorial staff or declare their funding sources, for example.

The code also appears to promote "collegiality" above editorial independence, stating that when a conflict of interest or "disagreement in the estimates" arises, members are expected to "jointly decide on the reliability of data" with the involvement of GFCN management.

Members must submit themselves to random compliance checks, though the code offers little detail on how these assessments will be undertaken or by whom.

Questions of independence and impartiality

The apparent lack of editorial safeguards is troubling given Russia's poor record on press freedoms and the history of many players involved in the GFCN.

The network's founders include Russian state-owned news agency TASS Media and the ANO Dialog, a non-profit established by the Moscow city government.

TASS Media has been suspended by the European Alliance of News Agencies for "not being able to provide unbiased news", while the ANO Dialog is subject to US, EU and UK sanctions for its involvement in Russia's "Doppelganger" disinformation campaign.

At the time of writing, 49 people and organisations are listed as having joined the GFCN.

They include "International Reporters", a website which, according to Reporters Without Borders (RSF), has received funding from the ANO Dialog and Russia's Ministry of Digital Development and uses "foreign propagandists" to spread disinformation in support of Russian foreign policy.

Several GFCN members have also published stories for the state-funded or -directed Russian media organisations RT and Sputnik News, both of which have been classed by the US State Department as "disinformation and propaganda outlets".

Meanwhile, IFCN-accredited fact checkers Facta and Maldita have detailed their concerns about many GFCN participants, including some who have suggested the killing of civilians in Bucha was staged by "Ukrainian Nazis" or have promoted repeatedly debunked disinformation narratives about the supposed spending sprees of Ukrainian President Volodymyr Zelenskyy's wife.

Citing the suppression of independent journalism in Russia, IFCN director Angie Drobnic Holan recently told German news outlet DW, "We do not consider [the GFCN's] activities to fall within the professional fact-checking ecosystem."

GFCN representatives have denied the network has any state affiliation, telling RSF that "none of the members of our organisation represent specific states" and that its work is done out of a "love for the truth".

What is the network publishing?

At its launch, the GFCN website was billed as "an international portal that fosters an honest and open approach to fact-checking, aggregating relevant investigations and refutations".

So far, however, much of its content would be better described as opinion or commentary. Recent headlines include, for example, "Neoliberals and the LGBT flag are taking Moldova by storm" and "I don't see any effort to combat false information in the EU. On the contrary, it is at war with the truth".

Some of the few fact checks published to the site have drawn criticism from Facta and DW for including questionable evidence, criticism the GFCN rejects.

Confusion through mimicry

Maksim Markelov, an expert in Russian disinformation at the University of Manchester, said the GFCN appeared to be applying a tried-and-tested method of using mimicry to advance Russian goals.

"By imitating democratic practices such as fact checking, pro-Kremlin debunking content may aim to reduce the perceived credibility of the news it critiques, deflect blame for Russia's aggressive actions, discredit its critics, and reinforce pro-regime attitudes," he said.

This tactic supports Russia's broader objective of sowing "epistemic chaos", or eroding consensus over what is true, leaving audiences disoriented and unsure of how to act, Dr Markelov explained.

And it has been used before, he added, pointing to War on Fakes, another Russian media outlet that positioned itself as a fact-checking initiative while advancing Kremlin-friendly narratives.

(War on Fakes has been found by fact checkers to have spread state disinformation and propaganda. Its creator, Timofey Vasiliev, is now a member of the GFCN.)

As the European Digital Media Observatory recently put it, the GFCN disguises the Kremlin's "false narratives as 'fact checking' by mimicking the aesthetic form but fundamentally contradicting the principles of independence and impartiality".

Russia's underlying strategy is to "promote the idea that the difference between fake and verified news lies not in the factual accuracy or journalistic standards but in the alleged political affiliation of the sources", the observatory said.

Social media awash with Israel-Iran fakes

Three AI-generated images, of destroyed buildings, a downed fighter jet and hail of missiles

Fact checkers worked overtime this month to stem the flood of viral misinformation following Israel's decision to attack Iran over its nuclear capabilities.

As Iran retaliated, videos claiming to show extensive damage on both sides circulated on social media — offering a lesson in the benefits of healthy scepticism, particularly during wartime.

Numerous scenes of bombed-out Israeli cities were generated using artificial intelligence (AI) or even lifted from computer games, according to Full Fact, AAP FactCheck and AFP Fact Check, while DW identified several AI-generated images of the supposed devastation in Iran.

Footage of rocket launches and downed jets that predated the conflict was also broadcast as current news by official media channels in both countries, France24 reported.

As for a narrative that Iranians had welcomed Israel's attacks, AAP FactCheck found that a clip of Iranians celebrating in the street dated from at least 2023 and that footage of others chanting "we love Israel" had been AI-generated.

According to Snopes, AI was also used to create a misleading aerial image of a downed — and apparently gigantic — Israeli F-35 fighter jet. And a video claiming to show the explosion at Iran's Evin Prison was likely also generated from an old photo, ABC NEWS Verify found, noting that the quality of the video made it hard to be definitive.

America's airstrikes on three Iranian nuclear facilities sparked a fresh round of falsehoods, from claims of US protesters supposedly taking to the streets to support Iran to clips purportedly showing huge explosions near the bomb sites.

In a now-familiar story, one such clip was filmed in Syria in 2024 while another was shared after being stripped of its original "AI-generated" label, according to AAP FactCheck.

It's a reminder to use reverse image search to track down the origin of footage and to look for visual inconsistencies that might signify the use of AI.

These clues are only getting harder to spot, though one potential flag lies in a video's length. Google's Veo 3 is only capable of generating videos up to 8 seconds long, meaning shorter cinematic scenes could — at least for now — signify a potential fake.
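As a rough illustration of that length heuristic, the check below flags very short clips for closer inspection. The 8-second cap is an assumption taken from the text above and will likely change as models improve; the function name and threshold are hypothetical, and clip length alone proves nothing either way.

```python
# Heuristic sketch: very short "cinematic" clips may warrant extra
# scrutiny, since some generators (e.g. Google's Veo 3) currently cap
# output at around 8 seconds. Illustrative only, not a detector.
ASSUMED_VEO3_MAX_SECONDS = 8.0  # assumed cap, per the text above

def warrants_extra_scrutiny(duration_seconds: float,
                            threshold: float = ASSUMED_VEO3_MAX_SECONDS) -> bool:
    """Flag clips short enough to fit inside a single generated shot."""
    return 0 < duration_seconds <= threshold
```

Under these assumptions, a 6-second clip would be flagged for a closer look, while a 30-second one would pass this particular check (though it could still be fake for other reasons).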

AI tools can also struggle with verification. Digital Forensic Research Lab researchers tracked the performance of X's chatbot, Grok, during the early days of the conflict, finding the chatbot had "significant flaws" when it came to providing reliable information and at times offered contradictory answers to similar verification questions asked only minutes apart.

The whip around

A man sits at an electric piano in a darkened room while using a laptop
  • Fraudsters are increasingly using AI tools to generate illegitimate royalties from music streaming platforms. As WIPO Magazine reports, bad actors have long used bots to inflate their streaming numbers. Now, however, they are using AI to generate millions of songs that require fewer plays to produce the same result.

    But while fewer plays might be less likely to raise suspicions, the same can't always be said of the music itself. Just ask Paul Bender, who discovered that a number of "absolutely cooked" tracks had been uploaded to Spotify under his account name. They were likely generated using AI, he said, and "probably the worst attempt at music I've ever heard". Then there is the Velvet Sundown, a band who may or may not exist.

  • It seems even owners of electric vehicles believe misinformation about their cars. That's according to a new study by University of Queensland researchers who analysed people's belief in nine major myths about EVs (which, for the record, are less likely to catch fire than petrol cars).

  • The release of the Reuters Institute's 2025 Digital News Report has revealed global concern among respondents about their ability to tell what is true or false in online news, particularly among people in the US and Africa (73 per cent). Nearly half of all people surveyed (47 per cent) viewed online influencers and politicians as the biggest sources of false or misleading information, but opinion was divided on whether such content should face stricter moderation by social media companies.

  • If you've been wanting to explore how generative AI works but don't know where to start, researchers at QUT's GenAI Lab have got you covered. They've just launched the GenAI Arcade, a new platform where you can test the capabilities and limitations of generative AI tools through a series of interactive games, without the need for any specialist know-how.

  • The Washington Post plans to open its opinion pages to more voices with the help of an AI writing coach, according to the New York Times (which notes that articles would still be reviewed by human editors). The decision is another example of how newsrooms are cautiously integrating AI into their processes, which is the subject of a new report by the non-profit Aspen Institute. The use of AI is not without risk, however, as the Chicago Sun-Times recently discovered when it published a summer reading list featuring several books that turned out to be fictitious.

  • Keen to put your deepfake detection skills to the test? The New York Times has released a short quiz that challenges you to distinguish between real and synthetic videos. We won't say how we scored — but don't feel bad if you don't score a perfect 10.

This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.

04 July 2025

Acknowledgement of Country

RMIT University acknowledges the people of the Woi wurrung and Boon wurrung language groups of the eastern Kulin Nation on whose unceded lands we conduct the business of the University. RMIT University respectfully acknowledges their Ancestors and Elders, past and present. RMIT also acknowledges the Traditional Custodians and their Ancestors of the lands and waters across Australia where we conduct our business - Artwork 'Sentient' by Hollie Johnson, Gunaikurnai and Monero Ngarigo.
