The Repost: June 2025

Good morning,

This month, we investigate the impact of AI-powered bots on social media. Plus, Labor's super proposal sparks a flurry of misinformation, and a Chinese paraglider sails into a verification snag.

AI is creating more bots, and terrible conversations

[Image: two toy robots communicating through a tin-can telephone] AI is fuelling more conversational bots, and lots of noise. (Credit: Adobe Stock)

Online bots have become a fixture of the modern media landscape, used to sell products, shape public opinion and manipulate national elections.

Of course, bots — or automated programs that perform repetitive tasks — are nothing new, but they are more prolific than ever. By some estimates, they account for more than half of all internet traffic and 20 per cent of online chatter about global events. On social media, they are also getting harder to spot.

So what's behind the surge in bots, and what does it mean for our social media feeds? We spoke with three bot researchers to find out.

Rise of the noisy bot

Timothy Graham, an associate professor in digital media at Queensland University of Technology, said the arrival of new AI tools such as ChatGPT had meant bots could now be programmed cheaply and easily.

And rather than simply "parroting" content over and over, they can respond to people in a more conversational and realistic way.

But this doesn't mean they have useful things to say. Having recently investigated a series of accounts posting about the Great Barrier Reef, Dr Graham found bots talking to each other in a loop of AI-generated chatter, producing misleading content without any concern for accuracy.

He said this kind of "bot babble" served to create the impression of a groundswell of support around an issue by flooding social media with content.

Dominique Carlon, a research fellow studying bots at Swinburne University of Technology, told The Repost that, thanks to bots, "You can no longer assume that what you see in the comment section is a reflection of reality or representative of wider public sentiment."

At the same time, the presence of bots doesn't always signify something more sinister.

As Elise Thomas, a senior OSINT investigator at the Centre for Information Resilience, noted, most bots are there to make money. When they comment on social issues, it is more likely because they have been programmed to target "hot topics" than because they are trying to subvert elections.

"Anything that attracts attention online is going to attract bots," she said.

Learning from 'Lana'

This fact appears to be borne out by "Lana", one of several likely bot accounts The Repost has been tracking since the recent federal election.

Lana regularly posts under 7News TikTok clips, leading some users to claim she is a bot on the Coalition payroll. However, we analysed 60 of her comments and found no partisan opinions, only neutral and banal takes on the daily news.

(Her lukewarm observations include: "The tension is definitely rising as the election heats up. Interesting to see how this unfolds.")

Despite Lana's high followership and human-like comments, there is little doubt she is a bot. Telltale signs include a stolen profile image, an absence of personal information and, more obviously, the occasional comment like "INVALID JSON".

A closer inspection of Lana's activity and followers also reveals other accounts publishing variations of her comments under the same posts at around the same time — all of which suggests she is part of a larger bot network.
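
For readers curious about how such overlaps can be surfaced, below is a minimal sketch of the idea: flag pairs of accounts that post near-identical comments under the same post within minutes of each other. The account names, field names and thresholds are invented for illustration; this is not a description of any specific researcher's toolchain.

```python
# Illustrative sketch: flag pairs of comments under the same post that are
# near-duplicates and were posted close together in time. All data, field
# names and thresholds here are hypothetical.
from dataclasses import dataclass
from difflib import SequenceMatcher
from itertools import combinations


@dataclass
class Comment:
    account: str
    post_id: str
    text: str
    timestamp: float  # seconds since epoch


def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two comments (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_coordinated(comments: list[Comment],
                     min_similarity: float = 0.8,
                     max_gap_seconds: float = 600) -> list[tuple[str, str]]:
    """Return pairs of accounts posting near-identical comments on the same
    post within max_gap_seconds of each other."""
    flagged = []
    for a, b in combinations(comments, 2):
        if (a.post_id == b.post_id
                and a.account != b.account
                and abs(a.timestamp - b.timestamp) <= max_gap_seconds
                and similarity(a.text, b.text) >= min_similarity):
            flagged.append((a.account, b.account))
    return flagged


# Usage with made-up comments
comments = [
    Comment("lana", "7news_clip_1",
            "The tension is definitely rising as the election heats up.", 1000),
    Comment("mia_22", "7news_clip_1",
            "Tension is definitely rising as this election heats up.", 1200),
    Comment("human_user", "7news_clip_1",
            "Who is moderating the debate tonight?", 1300),
]
print(flag_coordinated(comments))  # [('lana', 'mia_22')]
```

In practice, researchers combine many such signals (profile images, posting cadence, follower overlap), but the repeated-comment check captures the pattern described above.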

So what's the point of these bots?

Dr Graham posited that Lana was likely an attention-management bot, there to "make others feel like [the comments section] is a happening place".

Dr Carlon, meanwhile, suggested Lana could be a fame-enhancement bot, working alongside other paid bots to boost the visibility and engagement metrics of other accounts.

She said such networks can spread the risk of a fame campaign because if one account is detected, the others can remain active.

Alternatively, Ms Thomas said, the account might be a commercial bot seeking to establish an "authentic" pattern of life, after which it could be stockpiled for later use.

In an ironic twist, as humans increasingly use AI programs in their own lives, they may unwittingly help bots to become better at avoiding detection.

"Increasingly, human users are adopting the phrasing patterns of AI programs such as ChatGPT in their posts and comments," Dr Carlon said.

"When humans copy and paste direct sentences from AI models or slightly rephrase them (or even subconsciously start to imitate them), the boundaries between automated and human content is increasingly blurred."

The whip around

[Image: a paraglider sailing above a mountain range] Viral footage of a Chinese paraglider showed signs of AI manipulation. (Credit: Adobe Stock)
  • Footage of an ice-encrusted Chinese paraglider being lifted high into the Earth's atmosphere made global headlines last week, but not everyone was convinced by it. According to ABC News, the viral video showed signs of AI manipulation, including clouds that looked two-dimensional and a helmet that changed colour. And while it is plausible the paraglider reached the height claimed (8,598 metres), the claim has not been independently verified.

  • Back home, a government proposal to introduce an additional 15 per cent tax on earnings from multi-million dollar superannuation balances has sparked a flurry of misinformation. Scammers are taking advantage of confusion around the topic, falsely claiming that changes to withdrawal limits and the age at which super can be accessed will take effect in June. But no such changes are being proposed, and Labor's bill is still before parliament, which doesn't sit until July.

    Contrary to some claims, the proposed tax rise will only apply to earnings on the portion of super balances above $3 million. It will not apply to paper profits on non-superannuation assets such as property, as AAP FactCheck has reported.

  • Several European nations were on high alert last month, holding national elections amidst reports of targeted disinformation campaigns. In Romania, presidential elections were reheld after the results of last year's poll were junked following claims of a coordinated social media campaign to artificially boost the popularity of a far-right candidate. According to the BBC, Romanian influencers were recruited to promote pro-Russian candidate Călin Georgescu, while TikTok reportedly discovered and took down a network of 27,000 inauthentic accounts.

  • In a glimmer of good news, a website notorious for producing deepfake porn has been shut down after losing a key service provider. The site, Mr Deepfakes, announced in May that it would not relaunch. Meanwhile, Civitai, an online marketplace for sharing AI-generated images, has banned the sharing of AI programs designed to generate the likeness of real people, citing pressure from payment service providers and new deepfake porn regulations in the United States and European Union.

  • Not bored of bots yet? Check out one of the first: Eliza, created in 1966 by Joseph Weizenbaum. Designed as a Rogerian therapist, Eliza largely reflects her users' questions back onto them. Expect to be infuriated.
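
For the curious, here is a toy sketch of the reflection trick at the heart of Eliza: match the user's sentence against a few patterns, swap first- and second-person words, and hand the statement back as a question. The rules below are a tiny invented subset, not Weizenbaum's original script.

```python
# Toy ELIZA-style responder: pattern-match the input and reflect it back.
# The rules and word list are an invented, minimal subset for illustration.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"(.*)"), "Please tell me more about that."),
]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply points back at the user."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())


def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))


print(respond("I am worried about bots"))
# "How long have you been worried about bots?"
```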

This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.

06 June 2025
