The Repost is back for 2026, in what's shaping up to be a consequential year for information integrity. We kick off with a look at what AI "swarms" and Moltbook, the social media site just for bots, mean for the future of disinformation.
The arrival of Moltbook, a new bots-only social media site where AI agents can post and interact with each other, caused quite the stir this month, not least because it revealed a group of them plotting our extinction.
AI agents are essentially digital assistants that can perform complex tasks by collecting data, interacting with their environment and making autonomous decisions to achieve their goals.
Moltbook's agent-to-agent discussions looked intelligent enough that some people worried the "singularity" had arrived, a conclusion that may be premature given how easily a (human) hacker was able to manipulate the site.
Programming experts told the New York Times that much of the chatter looked impressive but was meaningless, adding that the agents' scheming was merely a reflection of the dystopian sci-fi novels that populate AI training datasets.
Importantly, AI agents follow rules and goals set by humans. So perhaps the key takeaway from Moltbook is, as DFR Lab's Esteban Ponce de León has argued, how easily human intentions and narratives can be "passed through AI systems in a way that obscures their human origin".
According to Mr Ponce de León, this breaking of the "attribution chain" — where a human programs an agent, the agent posts something and the output is read as autonomous AI behaviour — creates a new kind of "deniability shield" for malicious actors engaged in scams and disinformation. That amounts to "a fresh iteration of the liar's dividend, where … human actors disclaim responsibility for what their agents produce," he said.
Which brings us to a new paper published in Science [$], in which an international team of AI and misinformation experts have warned that democracies could soon be threatened by "swarms" of AI agents working covertly to manipulate beliefs and behaviours at massive scale. (See also this free Guardian article.)
AI swarms could work with minimal human oversight, maintain persistent online identities, work across platforms, coordinate to pursue shared goals while varying tone and content, and adapt in real time to clues and human responses, the authors wrote.
Swarms and the actors behind them would become increasingly difficult to detect, able to not only infiltrate all manner of online forums but also poison the data on which other AI models are trained, with knock-on effects for future tools and people's ability to access accurate information.
Agents could — and theoretically already can — help sway elections, and at least one cybersecurity firm has argued that swarms are already here.
Photos shared by Italian police showing two officers being attacked during violent clashes in the city of Turin have kicked off conspiracy theories that the officers were not hurt at all, after a key image was found to have been altered using AI.
A spokesperson for the police said they did not make any edits but admitted they had simply shared the most "viral" photo they had found online. An analysis by Italy's Facta (translation via Google) details how that image evolved from a grainy video still into a crisp photo — and how social media users were quick to spot missing details and inconsistencies in the evidence, even down to the varying degrees to which an officer had shaved his neck.
Summarising the findings in a social media post, Facta’s deputy director put it bluntly: "What’s the moral of the story? Stop using AI to manipulate images of real events if you don’t want people to doubt the reality of what you’re showing."
The US government has decreed that fact checkers, content moderators and others deemed to have "attempted censorship" of its citizens will now be denied visas under the country's skilled migration program.
Officials have been directed to weed out applicants whose work histories include fighting misinformation and even working in "trust and safety", which conceivably includes efforts to combat social media scams and child sexual abuse.
Needless to say, we don't agree that publishing fact checks amounts to censorship. And as the International Fact-Checking Network has warned, policies that penalise the pursuit of accuracy send a chilling message to journalists and others worldwide.
But that may be the point, with the State Department sanctioning five Europeans whose work the IFCN has described as "defending the public's right to reliable information" and "civic participation under laws passed by their own democratic governments". The individuals include a former EU commissioner and leaders of organisations such as the Global Disinformation Index and Centre for Countering Digital Hate.
Misinformation has continued to circulate in the months following the antisemitic Bondi terror attack in which two gunmen killed 15 people and injured 40.
The massacre was accompanied by a surge of false claims that misidentified suspects and heroes; criticised police officers; manipulated and fabricated quotes from political leaders; accused victims of staging the attack; and baselessly alleged that Israel was behind it. Some misinterpreted Google Trends data to wrongly claim the attack was foreknown, while others shared old footage to falsely claim pro-Palestine rallies were held the next day.
More recently, social media users have taken aim at Virginia Bell, the former High Court justice tasked with leading a royal commission into the circumstances surrounding the terror attack.
In posts implying she is biased against the Jewish community, some claimed Ms Bell was photographed at a pro-Palestine march over Sydney Harbour Bridge. Others said she had approved the August 2025 march.
However, AAP FactCheck has confirmed that the woman in the photo was not Ms Bell but former SBS newsreader Mary Kostakidis. And the march was approved by a different judge entirely — perhaps no surprise given that Ms Bell retired from her High Court role four years earlier, in 2021.
The Trump administration's immigration crackdown in Minnesota has been a lightning rod for falsehoods and reinforced the importance of visual evidence just as our ability to trust it is being eroded.
Reuters has documented six violent encounters with law enforcement where official narratives have been contradicted by real-world evidence, including the widely reported killings of US citizens Renee Good and Alex Pretti.
In the case of Ms Good, senior administration officials alleged she had "weaponised" her vehicle and driven it into an immigration agent who then shot her. But a forensic video analysis by the New York Times found that the available evidence showed "no indication" the agent had been run over.
The White House claimed Mr Pretti was killed by customs and border officials while "brandishing" a gun, but multiple video analyses by Bellingcat and others showed him approaching the officers with only a camera in hand. Although he had a licensed gun in his belt, the available evidence showed he was shot after it had been confiscated, and as he lay restrained on the ground.
In response to the killings, Wired has published some great practical tips for filming so that your evidence isn't dismissed as AI. There has, after all, been no shortage of people using AI tools to complicate the picture, such as social media users "enhancing" images and falsely claiming to have unmasked Ms Good's shooter. The White House also published a photo of an attorney being arrested that had been edited to make it look as if she was crying.
Elon Musk's AI chatbot, Grok, spent the festive season flooding X with sexualised and nearly nude images of women and children after a new editing feature for videos and images was announced on Christmas Eve.
The change meant users could request that Grok edit photos to undress real people and place them in sexualised positions. The resulting deepfakes sparked global outrage and triggered investigations in multiple countries into potential violations of laws against child sexual abuse material and nonconsensual sexual imagery.
Australia's eSafety Commissioner confirmed it was investigating several reports of sexualised or exploitative imagery generated by Grok. Several experts told Reuters that Grok's developers, xAI, had ignored earlier warnings from civil society and child safety groups that their chatbot risked unleashing "a torrent of obviously nonconsensual deepfakes".
"Nudifiers", or AI tools that digitally undress people, are nothing new, but X's change lowered the barrier to entry and brought these tools from the fringes of the internet into the mainstream. (Not that they were completely fringe. An audit by Indicator found that Meta platforms ran 25,000 nudifier ads last year.)
On January 15, X's safety team announced it had made changes to prevent the "Grok account on X" from editing images of real people in revealing clothing — though the Guardian later said this was still possible via Grok's standalone app.
This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.