The Repost #7: November 2025

As OpenAI's latest app, Sora 2, takes AI-generated video to new levels, we examine how tech companies' pursuit of new products is challenging our ability to believe what we see.

We also take a look at Elon Musk's "Grokipedia" and the psychology behind why some people double down on easily disproven claims.

What happens when 'realism' trumps reality?

[Image: a crowded movie theatre overlaid with cracked glass]

"Move fast and break things" has been the guiding philosophy of Silicon Valley startups for nigh on 20 years, but the launch of OpenAI's Sora 2 last month suggests a willingness by tech companies to shatter things at an ever-increasing pace — not least our shared reality.

The AI-powered app, which can create convincing videos from any text prompt, quickly became the most downloaded product on Apple's US app store, and its users wasted no time in flooding social media with reality-bending clips.

While it might be fun to watch aliens compete on MasterChef or Pikachu fail at standup comedy, viral videos of OpenAI CEO Sam Altman shoplifting or sporting a Nazi uniform highlight the technology's more sinister potential.

The web is littered with examples of people, young and old, believing Sora-generated videos to be genuine.

In the US, authorities have asked kids to stop pranking their parents with fake home-invasion clips that risk triggering police callouts. Meanwhile, Fox News has had to issue an embarrassing correction after running a story in which its reporters mistook Sora-generated clips for real people complaining about cuts to welfare.

To make matters worse, services for removing the tiny "cloud" watermark Sora uses to flag AI content have proliferated since its release.

OpenAI says it has built safety into the app "from the very start". However, there are many examples of its rules being either inadequate or developed only after the damage is done.

Misinformation researchers told the Guardian that Sora's safeguards were effectively "not real", finding they could generate videos of mass shootings and bomb scares despite its terms prohibiting content that promotes violence.

Users have also been sharing prompts to bypass the app's restrictions on sexual and other content, while a civil rights group has documented how the lack of an explicit ban on "targeted hate" has facilitated the spread of racism, homophobia and misogyny.

NewsGuard has tested Sora's ability to produce misinformation when prompted, finding it generated videos promoting provably false claims, among them Russian disinformation talking points, in 80 per cent of test cases.

The increasing realism of AI-generated content is eroding our ability to trust image-based media, and this doesn't just make it easier to fall for fakes; it also allows people to weaponise doubt to avoid accountability for their actions.

"This is because they are able to allege that images or videos of them engaging in criminal activity, sexual harassment, or racism are in fact 'fake' and generated by technology to damage their reputation", Nicole Shackleton, a law lecturer at RMIT University, told The Repost.

Legal scholars have dubbed this problem the "liar's dividend", arguing that liars become more credible as awareness of inauthentic media grows, because a sceptical public "will be primed to doubt the authenticity of real audio and video evidence".

Even before Sora's latest release, there were attempts to dismiss real videos as "deepfakes", such as when lawyers for Tesla tried to walk back its CEO's overblown claims, or when an Australian politician tried to discredit footage that showed him snorting a white substance.

Some people are now finding it harder to convince others to believe things they really have done.

Mr Altman has acknowledged some of the risks presented by his company's app, laying out several principles he said would "guide us towards more of the good and less of the bad".

"The bad" has turned out to include the digital recreation of celebrities without their consent — which, following an intervention by the US actors' union, led the platform to promise greater enforcement of its rule that requires people to "opt-in" before their likeness can be used.

But that rule applies only to the living, leaving users free to generate videos of dead celebrities from John F Kennedy to Michael Jackson, something relatives and experts have argued risks rewriting history and distorting the legacies of public figures.


OpenAI has claimed there are "strong free speech interests in depicting historical figures", though the company appears to have softened its stance, having "paused" depictions of Martin Luther King Jr. after the US civil rights leader was made to star in numerous disrespectful videos.

(The company says authorised representatives of the dead can now "request" that the person's likeness not be used; it has also said this ability to opt out is for the "recently deceased".)

Sora 2's flagship feature, "Cameos", which allows users to generate their own digital likeness, has also come under fire.

Users were initially able to choose who else could use their cameo, and the app disabled downloads for videos featuring cameos of other people. But within a week of the app's launch, a stream of offensive and defamatory videos forced the company to introduce new controls allowing users to specify in advance how their cameo could be used.

This change still places the onus on individuals to anticipate the potentially harmful depictions they need to opt out of.

It also does little to address the number of workarounds users have discovered for recording and extracting clips from the platform, meaning videos can be reshared and put to potentially harmful use.

The whip around

  • Tech billionaire Elon Musk has launched a rival to Wikipedia, named Grokipedia. The site's entries are authored and "fact checked" by Musk's AI chatbot, Grok, which Repost readers will recall has a history of sharing racist, conspiratorial and inaccurate content. Mr Musk has accused Wikipedia, with its transparent editorial policies and volunteer editors, of being "controlled by far-left activists". But he seems happy to recycle Wikipedia content on his site, albeit with an added dash of right-wing bias.

  • Australians can now access Google's "AI mode", a new search tool the company says can answer complex queries and even plan holidays. The move continues Google's evolution into an "answer engine" that collects and publishes information rather than simply referring users to other sites. Not everyone will welcome the announcement, however. The ABC reports that news sites have already suffered dramatic falls in search traffic in the year since Google introduced its "AI overviews" feature.

  • A UK coroner has ruled that a conspiracy theorist's alternative health beliefs contributed "more than minimally" to her daughter's death. The mother, a former nurse who was struck off the health register for spreading COVID-19 misinformation, influenced her daughter to refuse chemotherapy in favour of juices and coffee enemas for what was a likely treatable case of non-Hodgkin lymphoma. The tragic story unfolded as UK health professionals warn that misinformation is leading more parents to reject medical interventions, such as vaccinations, for their children.

  • Fact checkers with Aos Fatos have taken aim at false claims that Palestinians suffering from famine and death due to the Israeli military occupation of Gaza are paid actors. The Brazilian fact checkers documented 75 social media posts that used the terms "Pallywood" or "Gazawood" to suggest desperate scenes depicted in online photographs and videos were staged. The "Pallywood" theory, a portmanteau of Palestine and Hollywood, has continued to circulate despite being debunked repeatedly by fact-checking organisations.

  • People who believe easily disproved claims often prioritise symbolic displays of strength and independence over facts, according to a new psychological study. The researchers surveyed participants from eight countries, finding those who believed COVID-19 misinformation tended to see adherence to health protection measures as giving in or "losing". Individuals with this mindset may be more resistant to fact-checking efforts because "literal truth is not the point", the researchers concluded. "What matters is signalling one isn't listening and won't be swayed."

  • With Cybersecurity Awareness Month having just wrapped up, here's a horror story for people using AI assistants to manage their schedules. Researchers have shown how Google's Gemini can be hijacked by sending it a calendar invite "poisoned" with hidden prompts. Once read by the assistant, the prompts allow hackers to download files, start Zoom calls and even control smart-home systems. Oh, and watch out for scams using fake CAPTCHA tests on compromised websites. These work by asking people to install a package containing malware as part of the test to prove they are "not a robot". Stay safe!

This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.

07 November 2025
