The Repost #10: March 2026

This month, as deepfakes from foreign Facebook accounts stir up political divisions in Australia, the British far right has embraced a fictional rapper to help sell its anti-immigration agenda. Plus, the Iran war has made a mess of social media, and new research suggests X's algorithm could make you more right wing.

Aussies fall for divisive deepfakes

An AI-generated image of Senator Pauline Hanson lecturing angrily while draped in an Australian flag

Australians are being misled by AI-generated deepfakes designed to divide political opinion and drive engagement on social media.

An investigation published by ABC News Verify last week uncovered a constellation of foreign-run Facebook accounts sharing misleading and AI-generated images of Australian politicians.

The team analysed some 370 images posted by 14 accounts over a single week and found that more than half featured One Nation leader Pauline Hanson — typically portrayed positively as a crusader battling her political opponents.

Other frequent subjects included Prime Minister Anthony Albanese, who fared less favourably, and Peta Credlin, the Sky News host and former chief of staff to prime minister Tony Abbott.

Most of the accounts looked to be chasing money, either through Meta's own content monetisation program or, most commonly, by linking out to fake news websites where users could be served ads.

Both Ms Credlin and Senator Hanson told the ABC they had been forced to field enquiries from the public about the deepfakes, while a since-deleted "Team Pauline Hanson" Facebook group, created by one of the deepfake-posting accounts from Vietnam, had swelled to almost 50,000 members.

Meanwhile, AAP FactCheck reported on a Facebook page named "Inside Australia" — run by an account in Sri Lanka — that was posting AI-generated videos of Muslims and immigrants complaining or demanding special treatment.

The title of one such video was "She's mad because her protein bar is not halal!". A screenshot from another clip was shared by a different Facebook page and received more than 15,000 comments, many of them negative and indicating that users had taken the image at face value.

Fictional UK rapper gets political

A new right-wing British political party considered "too extreme" by members of Nigel Farage's Reform UK has embraced AI to spread anti-immigration rhetoric.

The new party, Advance UK, recently released a major campaign video that draws heavily on AI-generated imagery and incorporates music and lyrics from a fictional British rapper named "Danny Bones".

As reported by the independent news website The Bureau of Investigative Journalism, both Bones and the Advance UK video are the product of a group that calls itself the Node Project, about which little else is known.

Bones's songs and video clips are laced with grievance, far-right symbolism and references to "mass migration", and they have attracted hundreds of thousands of views and streams online.

While some people may believe the rapper really exists, debunking his character as "fakery" may miss the bigger point.

Jean Burgess, a distinguished professor at QUT's Digital Media Research Centre, said the extent to which people see Bones's content as "authentic" may depend less on whether they are convinced of his existence and more on whether his content feels "true to experience" for particular audiences.

"[That content] articulates, amplifies, misdirects and weaponises real feelings", she said. "But as far as I know nobody involved is putting a huge amount of effort into convincing the audience that the virtual influencer is a natural person."

The Bones experiment fits within a larger European trend in which the far right has adopted AI to push its own idealised versions of national identity.

Iran falsehoods surge, with help from paid X users

Experts trying to make sense of the war in Iran have told BBC Verify they are seeing unprecedented amounts of increasingly realistic AI-generated content on social media, now including fabricated satellite photos.

Germany's Der Spiegel announced it was retracting several photos of the conflict after discovering they were fakes. And while synthetic content is rife online, plenty of social media users have simply repurposed existing photos and clips from video games or from other locations and events.

Premium users on X have been particularly problematic in spreading falsehoods. As reporters with Wired discovered when they reviewed hundreds of X posts containing misinformation about the war: "Almost all of the most viral posts … came from accounts with blue check marks, meaning they pay X for its premium service and could be eligible to earn money based on how much engagement their posts generate, even if the content is false."

The torrent of misleading content prompted X to declare it would suspend users from its "creator revenue sharing" program if they were caught posting AI-generated videos of armed conflicts without adding a disclosure.

But catching offenders is another matter. BBC Verify identified multiple examples in which X's own chatbot, Grok, wrongly labelled synthetic images as real. Ireland's RTÉ also reported instances of Grok spreading misinformation about the war.

Some of the most contested claims of the war to date have centred on the missile that struck an Iranian girls' school, killing more than 160 people.

Claims that Iran was to blame were contradicted by photo and video analyses suggesting the school was hit by a US-made Tomahawk missile, a type of missile Iran does not possess. Satellite imagery showed that several nearby military sites were struck at the same time, and that while the building had once been part of a naval compound, the school was walled off from it around a decade ago.

According to media reports, an internal US military report has conceded that blame for the tragedy lay with the Americans.

Could X’s algorithm be making you more right wing?

A new study in Nature has found that the algorithm used by X to recommend its content demotes posts from traditional media outlets and pushes users to adopt more conservative positions.

The study randomly assigned active X users in the US into groups that were asked to use either the algorithmic "For You" feed or the chronological "Following" feed for seven weeks in 2023.

The researchers measured users' political opinions before and after the trial, and found that those who switched from the "Following" to the "For You" feed were nearly 5 percentage points more likely to prioritise policy issues favoured by US Republicans, such as immigration and crime.

Those users were also more than 7 percentage points less likely to hold a positive view of Ukrainian President Volodymyr Zelenskyy.

Writing for The Conversation, QUT media professor Timothy Graham said one of the most concerning findings was that the study "showed the X algorithm nudged users towards following more right-leaning accounts, and that the new following patterns endured even after switching back to the chronological feed".

"In other words, turning the algorithm off didn't simply 'reset' what people see. It had a longer-lasting impact beyond its day-to-day effects."

In other news

  • Indicator uncovered an AI-powered podcast network publishing 11,000 episodes a day by scraping stories from news media websites. Its analysis of more than 100 episodes found the podcasts "often reused the same facts, structure and even phrases" from media stories and were "often published mere minutes after what seemed to be their source material". As the profits of newsrooms continue to be squeezed, the case offers a neat illustration of how AI threatens their financial viability.

  • The New York Times published a quiz that presents AI-generated and real-world literary paragraphs side by side and asks you to pick a favourite. It's an interesting exercise that gives a good sense of the increasing fluency of large language models. (And no, spotting the AI content is not as easy as simply counting the number of em dashes.)

  • For those interested in the legal frameworks that regulate false information, RMIT associate professor James Meese has published a new book exploring this very issue. Addressing Misinformation and Disinformation also takes a deep dive into how laws can work together with technical and social responses to tackle an issue that clearly isn't going anywhere.

  • Journalist Hamish Macdonald has filmed a new documentary series that delves into the question of what happens when we can't agree on basic facts. Speaking at a media literacy summit in Sydney this week, Macdonald said what he had learnt left him feeling a little scared about the future. Check out this preview of The Matter of Facts, which premieres on Tuesday, March 24.

This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.

20 March 2026