As OpenAI's latest app, Sora 2, takes AI-generated video to new levels, we examine how tech companies' pursuit of new products is challenging our ability to believe what we see.
We also take a look at Elon Musk's "Grokipedia" and the psychology behind why some people double down on easily disproven claims.
"Move fast and break things" has been the guiding philosophy of Silicon Valley startups for nigh on 20 years, but the launch of OpenAI's Sora 2 last month suggests a willingness by tech companies to shatter things at an ever-increasing pace — not least our shared reality.
The AI-powered app, which can create convincing videos from any text prompt, quickly became the most downloaded product on Apple's US app store, and its users wasted no time in flooding social media with reality-bending clips.
While it might be fun to watch aliens compete on MasterChef or Pikachu fail at standup comedy, viral videos of OpenAI CEO Sam Altman shoplifting or sporting a Nazi uniform highlight the technology's more sinister potential.
The web is littered with examples of people, young and old, believing Sora-generated videos to be genuine.
In the US, authorities have asked kids to stop pranking their parents with fake home-invasion clips that risk triggering police callouts. Meanwhile, Fox News has had to issue an embarrassing correction after running a story in which its reporters mistook Sora generations for real people complaining about cuts to welfare.
To make matters worse, services for removing the tiny "cloud" watermark Sora uses to flag AI content have proliferated since its release.
OpenAI says it has built safety into the app "from the very start". However, there are many examples of its rules being either inadequate or developed only after the damage is done.
Misinformation researchers told the Guardian that Sora's safeguards were effectively "not real", finding they could generate videos depicting mass shootings and bomb scares despite its terms prohibiting content that promotes violence.
Users have also been sharing prompts to bypass the app's restrictions on sexual and other content, while a civil rights group has documented how the lack of an explicit ban on "targeted hate" has facilitated the spread of racism, homophobia and misogyny.
NewsGuard has tested Sora's ability to produce misinformation when prompted, finding it generated videos promoting provably false claims, among them Russian disinformation talking points, in 80 per cent of test cases.
The increasing realism of AI-generated video is eroding our ability to trust image-based media, and this doesn't just make it easier to fall for fakes; it also allows people to weaponise doubt to avoid accountability for their actions.
"This is because they are able to allege that images or videos of them engaging in criminal activity, sexual harassment, or racism are in fact 'fake' and generated by technology to damage their reputation", Nicole Shackleton, a law lecturer at RMIT University, told The Repost.
Legal scholars have dubbed this problem the "liar's dividend", arguing that liars become more credible as awareness of inauthentic media grows, because a sceptical public "will be primed to doubt the authenticity of real audio and video evidence".
Even before Sora's latest release, there were attempts to dismiss real videos as "deepfakes", such as when lawyers for Tesla tried to walk back its CEO's overblown claims, or when an Australian politician tried to discredit footage that showed him snorting a white substance.
Some people are now finding it harder to convince others to believe things they really have done.
Mr Altman has acknowledged some of the risks presented by his company's app, laying out several principles he said would "guide us towards more of the good and less of the bad".
"The bad" has turned out to include the digital recreation of celebrities without their consent — which, following an intervention by the US actors' union, led the platform to promise greater enforcement of its rule that requires people to "opt-in" before their likeness can be used.
But that rule applies only to the living, leaving users free to generate videos of dead celebrities from John F Kennedy to Michael Jackson, something relatives and experts have argued risks rewriting history and distorting the legacies of public figures.
OpenAI has claimed there are "strong free speech interests in depicting historical figures", though the company appears to have softened its stance, having "paused" depictions of Martin Luther King Jr. after the US civil rights leader was made to star in numerous disrespectful videos.
(The company says authorised representatives of the dead can now "request" that the person's likeness not be used; it has also said this ability to opt out is for the "recently deceased".)
Sora 2's flagship feature, "Cameos", which allows users to generate their own digital likeness, has also come under fire.
Users were initially able to choose who else could use their cameo, and the app disabled downloads for videos featuring cameos of other people. But within a week of the app's launch, a stream of offensive and defamatory videos forced the company to introduce new controls allowing users to specify in advance how their cameo could be used.
This change still places the onus on individuals to anticipate the potentially harmful depictions they need to opt out of.
It also does little to address the many workarounds users have discovered for recording and extracting clips from the platform, meaning videos can still be reshared and put to potentially harmful use.
This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.