Good morning,
This month, we investigate the impact of AI-powered bots on social media. Plus, Labor's super proposal sparks a flurry of misinformation, and a Chinese paraglider sails into a verification snag.
Online bots have become a fixture of the modern media landscape, used to sell products, shape public opinion and manipulate national elections.
Of course, bots — or automated programs that perform repetitive tasks — are nothing new, but they are more prolific than ever. By some estimates, they account for more than half of all internet traffic and 20 per cent of online chatter about global events. On social media, they are also getting harder to spot.
So what's behind the surge in bots, and what does it mean for our social media feeds? We spoke with three bot researchers to find out.
Timothy Graham, an associate professor in digital media at Queensland University of Technology, said the arrival of new AI tools such as ChatGPT had meant bots could now be programmed cheaply and easily.
And rather than simply "parroting" content over and over, they can respond to people in a more conversational and realistic way.
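To make that shift concrete: where an old-style bot simply reposted a canned line, a modern one can hand each incoming comment to a language model and post whatever comes back. Below is a minimal sketch of that pattern in Python. It uses the OpenAI chat-completions client as one example of such a tool; the prompt and model name are illustrative choices, not a recipe drawn from any real bot.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

CANNED_LINE = "Great point! Check out my profile!"  # what an old "parrot" bot would post

def generate_reply(comment_text: str) -> str:
    """Generate a short, human-sounding reply to a comment, instead of parroting."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a casual social media user. Reply in one short, friendly sentence."},
            {"role": "user", "content": comment_text},
        ],
    )
    return response.choices[0].message.content
```

A few lines like these, run on a cheap schedule, are enough to give an account the conversational veneer the researchers describe.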
But this doesn't mean they have useful things to say. Having recently investigated a series of accounts posting about the Great Barrier Reef, Dr Graham found bots talking to each other in a loop of AI-generated chatter, producing misleading content without any concern for accuracy.
He said this kind of "bot babble" served to create the impression of a groundswell of support around an issue by flooding social media with content.
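The "bot babble" Dr Graham describes falls out of the same pattern almost by accident: point two such scripts at each other and every AI-generated comment becomes the next one's prompt, with no human or fact-checker anywhere in the loop. A toy illustration, reusing the hypothetical generate_reply function sketched above:

```python
# Toy "bot babble" loop: two bots reply to each other, so AI-generated
# chatter compounds indefinitely with no regard for accuracy.
comment = "The Great Barrier Reef is looking healthier than ever this year!"
for turn in range(4):
    comment = generate_reply(comment)  # each reply becomes the next prompt
    speaker = "Bot A" if turn % 2 == 0 else "Bot B"
    print(f"{speaker}: {comment}")
```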
Dominique Carlon, a research fellow studying bots at Swinburne University of Technology, told The Repost that, thanks to bots, "You can no longer assume that what you see in the comment section is a reflection of reality or representative of wider public sentiment."
At the same time, the presence of bots doesn't always signify something more sinister.
As Elise Thomas, a senior OSINT investigator at the Centre for Information Resilience, noted, most bots are there to make money. When they comment on social issues, it's more likely that they have been programmed to target "hot topics" than that they are trying to subvert an election.
"Anything that attracts attention online is going to attract bots," she said.
This fact appears to be borne out by "Lana", one of several likely bot accounts The Repost has been tracking since the recent federal election.
Lana's regular commenting on 7News TikTok clips has led some users to claim she is a bot on the Coalition payroll. But when we analysed 60 of her comments, we found no partisan opinions, only neutral and banal takes on the daily news.
(Her lukewarm observations include: "The tension is definitely rising as the election heats up. Interesting to see how this unfolds.")
Despite Lana's high followership and human-like comments, there is little doubt she is a bot. Telltale signs include a stolen profile image, an absence of personal information and, more obviously, the occasional comment like "INVALID JSON".
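That "INVALID JSON" slip hints at how such accounts are wired up: many bot scripts ask the model to wrap its reply in JSON, then post whatever string the pipeline hands back without checking it. A hedged sketch of how the error marker could end up in a public comment (the field name and fallback string are assumptions for illustration):

```python
import json

def extract_reply(model_output: str) -> str:
    """Pull the reply text out of the model's output, expected to be JSON."""
    try:
        return json.loads(model_output)["reply"]
    except (json.JSONDecodeError, KeyError):
        # A sloppy pipeline returns an error marker here, and the posting
        # step publishes it verbatim, which is how comments like
        # "INVALID JSON" surface in the wild.
        return "INVALID JSON"

print(extract_reply('{"reply": "Interesting to see how this unfolds."}'))  # normal case
print(extract_reply("Sure! Here's a friendly reply."))  # the telltale failure
```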
A closer inspection of Lana's activity and followers also reveals other accounts publishing variations of her comments under the same posts at around the same time — all of which suggests she is part of a larger bot network.
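That kind of coordination is detectable with fairly simple tooling: researchers look for near-identical comments posted by different accounts within minutes of each other. A simplified sketch of the idea using only the Python standard library (the sample comments are invented):

```python
from datetime import datetime
from difflib import SequenceMatcher
from itertools import combinations

# Invented sample data: (account, posted_at, comment text)
comments = [
    ("lana",   datetime(2025, 5, 1, 9, 0),   "The tension is definitely rising as the election heats up."),
    ("kara_x", datetime(2025, 5, 1, 9, 4),   "Tension is definitely rising as this election heats up!"),
    ("human1", datetime(2025, 5, 1, 14, 30), "Has anyone actually costed this policy?"),
]

def similarity(a: str, b: str) -> float:
    """Rough text similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of accounts whose comments are near-duplicates posted
# within ten minutes of each other: a classic coordination signal.
for (acc_a, t_a, text_a), (acc_b, t_b, text_b) in combinations(comments, 2):
    close_in_time = abs((t_a - t_b).total_seconds()) <= 600
    if close_in_time and similarity(text_a, text_b) > 0.8:
        print(f"possible coordination: {acc_a} and {acc_b}")
```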
Dr Graham posited that Lana was likely an attention-management bot, there to "make others feel like [the comments section] is a happening place".
Dr Carlon, meanwhile, suggested Lana could be a fame-enhancement bot, working alongside other paid bots to boost the visibility and engagement metrics of other accounts.
She said such networks spread the risk of a fame campaign: if one account is detected, the others can remain active.
Alternatively, Ms Thomas said, the account might be a commercial bot seeking to establish an "authentic" pattern of life, after which it could be stockpiled for later use.
In an ironic twist, as humans increasingly use AI programs in their own lives, they may unwittingly help bots to become better at avoiding detection.
"Increasingly, human users are adopting the phrasing patterns of AI programs such as ChatGPT in their posts and comments," Dr Carlon said.
"When humans copy and paste direct sentences from AI models or slightly rephrase them (or even subconsciously start to imitate them), the boundaries between automated and human content is increasingly blurred."
This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.