The Repost #8: December 2025

"AI slop" hit the big time in 2025, clinching the title of Macquarie Dictionary's word of the year. In our final edition of this year, we explore how digital garbage overran the internet, and why it's unlikely to go away any time soon. 

We also look at "poisoned pixels" as a defence against deepfakes and how misinformation on freebirthing and "T maxxing" is putting people at risk. Plus, we’ve launched a one-stop shop of essential digital literacy resources.

How the internet drowned itself in slop

An AI-generated image of computers floating down and sinking into a river of sludge

Believe it or not, it's been more than 18 months since the world was introduced to Shrimp Jesus, the half-human, half-crustacean harbinger of what we all now call "AI slop".

The AI-generated undersea creation exemplified the kind of low-effort, shoddy and often meaningless content that slop describes, helping propel the term into the public consciousness several years after it was coined.

Now, as 2025 draws to a close, the internet is choking on synthetic junk. The ever-expanding slop buffet includes book slop, game slop, music slop, web slop and more besides, all of it signalling a broader decline in platform and information quality, in a process sometimes referred to as "enshittification".

In August, nearly 10 per cent of the world's fastest-growing YouTube channels featured nothing but AI-generated content, according to an analysis conducted by the Guardian.

More recently, a digital marketing firm estimated that more than half of all new articles published online at the start of 2025 were generated by AI. (These results come with caveats, and growth appears to have peaked).

Even scientific journals are publishing papers riddled with meaningless AI-generated phrases and images that might politely be described as "anatomically incorrect".

But slop's presence is perhaps most keenly felt on social media, where, as one technologist recently explained, it competes for users' limited attention spans and displaces higher-quality, more helpful material. 

The blame lies partly with platforms' own content monetisation programs, which incentivise the industrial-scale production of frivolous (and often very weird) content by paying users for creating viral posts and videos.

That job is made easier and cheaper by AI tools, and it can be lucrative work. 

"Slop farms" reportedly net some creators upwards of $US5,000 ($7,600) a month, with success stories including an account that regularly posts AI-generated videos of an old man who talks about soiling himself. 

Some creators use in-app subscription features to solicit funding directly from audiences. Several Facebook accounts followed by The Repost, for example, request a small fee to support their work, which appears to include nothing but posting fabricated news and videos of tech billionaire Elon Musk.

Alongside these "spammers and scammers", whose content social media algorithms have been shown to amplify, a marketplace of creators promising to share their "revenue farming" secrets has also sprung up.

And while the sheer volume and inanity of slop is a major problem, it's not the only danger. 

The quest for virality can lead some creators to embrace violent and shocking content, as in the case of accounts dedicated to posting AI-generated videos of immigration deportations, or of nothing but women being shot in the head.

Digital platforms have started taking action to address the slop problem, citing its corrosive effects on user experiences. Pinterest and TikTok, for example, both recently introduced controls that let users limit the amount of AI content shown in their feeds.

Most major platforms have also agreed to adopt a common metadata standard for validating the authenticity of digital media, including AI-generated content. 

Developed by the multi-industry coalition C2PA, the standard comprises a system of metadata credentials that, working like digital stamps, record the origins of a media file and any edits made to it.
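To see how such a chain of credentials can establish provenance, here is a deliberately simplified toy sketch in Python. It is not the actual C2PA manifest format (which uses signed, standardised structures); it only illustrates the underlying idea of linking each edit record to the previous one via content hashes.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def new_manifest(media: bytes, creator: str) -> list:
    """Start a provenance chain with an origin record."""
    return [{"action": "created", "by": creator, "hash": digest(media)}]

def record_edit(manifest: list, edited: bytes, tool: str) -> list:
    """Append an edit record bound to the previous record's hash."""
    prev = manifest[-1]["hash"]
    return manifest + [
        {"action": "edited", "by": tool, "prev": prev, "hash": digest(edited)}
    ]

def verify(manifest: list, media: bytes) -> bool:
    """Check the chain is internally linked and matches the final file."""
    for earlier, later in zip(manifest, manifest[1:]):
        if later["prev"] != earlier["hash"]:
            return False
    return manifest[-1]["hash"] == digest(media)
```

For example, a file created by a (hypothetical) camera app and then modified by a generative-AI tool would carry two linked records; verification fails as soon as the file no longer matches the last recorded hash:

```python
original = b"pixels"
manifest = new_manifest(original, "camera-app")
edited = original + b"+ai-filter"
manifest = record_edit(manifest, edited, "gen-ai-tool")
verify(manifest, edited)       # True: chain intact, file matches
verify(manifest, b"tampered")  # False: file no longer matches the record
```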

But it is early days, and an October audit of five major platforms' content labelling systems undertaken by Indicator journalists found that only 30 per cent of test posts were correctly picked up as being AI-generated.

Pinterest fared best at just 55 per cent, while Google's and Meta's platforms "regularly failed to label content that had been created using their own generative AI tools", the audit found.

As platforms refine their systems, the European Union is developing mandatory rules on the labelling of AI-generated content as part of its AI Act, which comes into force in late 2026.

China and Spain have passed similar legislation, while new laws in California, home to 32 of the world's top 50 AI companies, will require platforms to clearly identify any AI-generated content or chatbots from August 2026.

How these rules will play out remains to be seen, especially given uncertainty around how well users actually respond to disclosure warnings. 

For people who can’t wait for things to improve, options include switching from the big social media platforms to invite-only online communities with stricter content controls and using search engines that offer greater flexibility to limit AI results. (At the extreme end, there's the Slop Evader browser tool, which turns back the clock to a pre-ChatGPT internet.)

But as we move into 2026 and beyond, slop may only get weirder, with researchers warning that when AI tools are trained on data that is increasingly saturated with AI-generated content, their outputs degenerate and eventually collapse into incoherent nonsense.

Judging by the (at times horrific) festive slop we've already seen — including these Christmas classics — that outcome may not be so far off. 

Computer icons and the word "e-learning" superimposed over a person typing on a laptop

Your one-stop shop for digital learning

We have launched a new searchable catalogue of essential digital literacy resources. 
 
Designed to help teachers and learners of all ages navigate the online information ecosystem, it offers easy access to a host of learning resources drawn from across the internet — including games, quizzes, toolkits, online modules, lesson plans and more.

The whip around

  • Monash University researchers have teamed up with the Australian Federal Police to create a new tool for disrupting the creation of AI-generated child abuse material, deepfakes and extremist propaganda. Called Silverer, the prototype tool adds a subtle pattern of pixels, or a "dose of digital poison" to an image. If the image is fed into an AI training dataset, the AI tool is tricked into producing images that are "very low-quality, covered in blurry patterns, or completely unrecognisable".

  • Users on Meta's platforms are being exposed to an average of 15 billion "higher risk" scam ads each day, Reuters has revealed. The estimate appeared in previously unreported internal company documents, which show Meta also earns a tidy profit from fraudulent ads. In 2024, advertising for scams and banned goods was expected to bring in 10 per cent of the company's total annual revenue. Meta disputes the latter figure but has declined to provide a revised estimate.

  • Remember Grokipedia? Indicator has taken a closer look at the Musk-owned AI encyclopedia, and the results aren't pretty. More than half of the site's entries were at least partly copied from its rival, Wikipedia, and while not always identical, the articles were often "highly similar". A smaller number, often on topics flagged by Wikipedia as "controversial", were found to have been "significantly rewritten to highlight a specific narrative". Grokipedia also cites a host of blacklisted or poor-quality sources, including conspiracy sites, that Wikipedia editors avoid. These sources were cited 2.6 million times across almost 890,000 entries.

  • A Melbourne influencer has become the subject of a public warning from Victoria's health complaints watchdog as it investigates allegations that she "facilitated and/or participated in home births which may put both mothers and babies at risk". The warning follows a series of deaths linked to "freebirthing", or homebirths that eschew the presence of a registered healthcare professional. Billed as a "natural" alternative to traditional midwifery, the practice is gaining popularity online but has also been described as a "cult" in which unlicensed birth coaches profit from giving unscientific and sometimes life-threatening advice.

  • The Guardian has uncovered a group of TikTok influencers collaborating with UK medical clinics to market blood tests to men as a route to testosterone replacement therapy. Testosterone is being promoted as a lifestyle supplement to counteract problems such as low energy, poor concentration and reduced libido. But medical experts warn that unnecessary testosterone use, often referred to as "T maxxing", can suppress the body's natural hormone production, cause infertility and increase the risk of blood clots, heart issues and mood disorders.

  • As we round out a year that began with Meta pulling the plug on its US fact-checking program, fact-checking stalwart Glenn Kessler has used a speech in Stockholm to survey the state of the industry. He offers a sobering assessment of the headwinds faced by fact checkers globally and how attitudes towards truthfulness – among politicians and the public – have deteriorated. Mr Kessler, who led the Washington Post's fact-checking efforts for 14 years, argues that fact checkers' best defence is transparency in everything they do and that they must continue "to make truth visible, persistent, and credible enough to matter".

  • The holiday season is nearly here, bringing joy and some potentially awkward conversations with conspiracy-minded relatives. So, how do you get through it without ruining Christmas? We've rounded up some useful articles to help you out. Some key takeaways: avoid shaming, find common ground, take time to understand each other and, if you can, validate their feelings without validating their beliefs. No promises!

This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.

05 December 2025
