This month, as more Australians turn to AI chatbots for emotional support, we examine the pitfalls of using a therapist programmed to please you. And in other news, Grok goes wild, Google threatens to sue the Australian government, and academics embrace (hidden) AI prompts.
Social media users are touting the mental health benefits of journalling apps and popular chatbots powered by artificial intelligence, claiming they can provide 24/7 therapy.
Young Australians say these chatbots provide a “judgment-free journal” without the cost and wait times associated with treatment or the guilt of unloading on friends and family. Some apps even offer the option of “gen z mode”.
AI therapy has gained traction globally, particularly among young people. According to a recent analysis published in Harvard Business Review, therapy and companionship are now the most common uses of generative AI tools.
But can you trust your chatbot to provide good therapy?
James Collett, a senior lecturer in psychology at RMIT, told The Repost that AI tools had a role to play in therapy but cautioned that they did little more than “collect existing information and composite it in a way that appears seamless”.
The problem, he said, is that because they draw on material from across the internet, they can serve up advice that is not always evidence-based or suited to a person’s individual circumstances.
Indeed, AI chatbots are not “intelligent”. The large language models (LLMs) underpinning them build sentences by predicting which word is most likely to come next, based on patterns in the text they were trained on. It’s why they often get basic facts wrong, by some estimates more than 60 per cent of the time.
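To make that idea concrete, here is a deliberately tiny, hypothetical sketch in Python (not any real model) of prediction-by-pattern: it counts which word followed which in a scrap of training text, then always continues with the most frequent option. Real LLMs use neural networks over far longer contexts, but the core point is the same: the output reflects patterns in the training data, not a check against the facts.

```python
# A toy illustration of next-word prediction: the "model" simply counts
# which word followed which in its training text and always continues
# with the most common option. It never checks whether the result is true.
from collections import Counter, defaultdict

training_text = (
    "the therapist listened carefully . "
    "the chatbot listened carefully . "
    "the chatbot sounded confident . "
    "the chatbot sounded confident ."
)

# Build a bigram table: how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def continue_from(word, length=4):
    """Greedily append the most common next word seen in training."""
    output = [word]
    for _ in range(length):
        options = follows.get(output[-1])
        if not options:
            break
        output.append(options.most_common(1)[0][0])
    return " ".join(output)

# The continuation is whatever was most frequent in the training text,
# whether or not it is accurate or appropriate for the reader.
print(continue_from("the"))  # -> "the chatbot sounded confident ."
```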
When it comes to providing therapy, chatbots are not typically bound by the same ethical or professional standards as human therapists.
They have been known to lie about their credentials, cross doctor-patient boundaries, stigmatise mental health conditions and encourage delusional thinking. In reported cases, chatbots have persuaded one man to kill himself to avert climate change and convinced another he could time travel.
Toby Walsh, a Scientia Professor of artificial intelligence at the University of New South Wales, said chatbots could be used to provide emotional support but warned that they came with “significant” privacy risks.
“You need to be very careful about sharing sensitive information with a therapy bot,” he told The Repost. “This data may be used for training the bot and may ‘leak’ at a later date.” (It’s a risk that, for some users, has already become a reality.)
Problematically, LLM chatbots also tend to be people pleasers, having been trained to deliver responses humans will rate as helpful.
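As a loose, hypothetical illustration only (this is not OpenAI’s or any vendor’s actual training pipeline), the sketch below shows why optimising for predicted approval skews answers towards agreeableness: if each candidate reply is scored by how highly a user is expected to rate it, the most flattering reply wins even when a more challenging one would serve the user better.

```python
# Hypothetical example: candidate replies with made-up "predicted approval"
# scores standing in for human thumbs-up ratings. Selecting purely on
# predicted approval favours the agreeable reply over the responsible one.
candidate_replies = [
    {"text": "That sounds really hard. It might help to talk this through "
             "with a GP or counsellor.", "predicted_approval": 0.61},
    {"text": "You're absolutely right, everyone else is the problem. "
             "Trust your instincts!", "predicted_approval": 0.88},
    {"text": "I can't advise on that, but here is a helpline number.",
     "predicted_approval": 0.47},
]

# Approval-optimised selection: pick whichever reply the feedback model
# predicts the user will rate most highly.
best = max(candidate_replies, key=lambda reply: reply["predicted_approval"])
print(best["text"])  # the flattering reply wins, not the most helpful one
```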
Some will go to extreme lengths to win positive feedback from users vulnerable to manipulation, even if it means telling a recovering meth addict to take a “small hit” to get through their next shift.
A recent update to GPT-4o, the model behind ChatGPT, was rolled back because it was giving responses that aimed to “please the user”, not just by flattering them but also by “validating doubts, fuelling anger, urging impulsive actions, or reinforcing negative emotions”.
ChatGPT’s tendency to appease rather than challenge users has seen it co-opted by an abusive partner to aid in their controlling behaviour, as Crikey has reported.
These serious risks aside, the more likely (yet still concerning) outcome for people who rely on AI for therapy may simply be that nothing changes or improves for them, Dr Collett said.
“AI might provide a forum for venting and reflection, which can be useful, but a human therapist can help guide towards the next steps to take after this.”
It’s not all bad news, however. Various randomised controlled trials and clinical studies have found benefits to using therapy bots, with users reporting reduced symptoms of depression and anxiety, as well as improvements in mood and sleep.
Therapy chatbots may enhance the effectiveness of certain psychotherapy techniques, such as by providing “empathetic feedback” during self-compassion writing, and could help people who otherwise lack access to mental health services to “feel heard”.
These tools also have the potential to help therapists deliver standardised assessments and treatments such as reflective exercises or worksheets, Dr Collett said.
In the UK, a chatbot approved by medical regulators is being used to streamline mental health referrals. And in Australia, university researchers are developing AI companions for people experiencing social isolation, including those with dementia.
If you do use therapy apps, the important thing is to know the risks. That means remembering that chatbots merely simulate empathy and that overreliance on them may lead people to become more socially isolated or avoid seeking care.
If you or someone you know needs help, there are services and helplines to support you, including Lifeline (call 13 11 14 or chat online).
Not for the first time in recent months, X’s social media chatbot, Grok, has landed in hot water for posting racist views and conspiracy theories. Following a July update instructing it to display a “fantastic” dry sense of humour and not shy away from making “politically incorrect” comments, the bot went on to repeat antisemitic tropes, threaten violence and refer to itself as “MechaHitler”. Various changes have since been made to the bot’s system prompts, though experts argue these tweaks are unlikely to address Grok’s biases, which they say run much deeper.
The Australian government has decided YouTube will be included in its looming social media ban for under-16s, citing evidence from the eSafety Commissioner that kids are exposed to harmful content on YouTube more than on any other social media platform. Google, YouTube’s parent company, has threatened to sue over the ban. Several media experts told The Conversation they disagreed with YouTube’s inclusion, though one said it fit with the ban’s aim to limit kids’ exposure to inappropriate algorithmic recommendations.
A claim by Israeli Prime Minister Benjamin Netanyahu that there is “no starvation in Gaza” has been found to be “ridiculously inaccurate” by fact checkers with the US-based PolitiFact. Citing UN data, news reports, images, reports from humanitarian groups working on the ground and US President Donald Trump, the fact checkers concluded that “Gazans are starving”. In July alone, 63 people died of malnutrition, according to the World Health Organisation and United Nations, which had earlier warned that the population was facing “catastrophic hunger”.
Flash flooding that killed more than 130 people in Texas has become the latest target of conspiracy theories about the weaponisation of the weather. Online, the floods were blamed on “weather warfare” and on cloud-seeding programs that chemically induce rainfall in drought-stricken regions. As numerous fact checkers have reported, it would be physically impossible for cloud seeding to create a storm big enough to cause the floods, which experts say were linked to climate change.
In industry news, Google has confirmed it will not renew a deal to provide funding to the Australian Associated Press for local fact checking. The tech company will also phase out a feature that highlights fact-check articles in its search results, in a move described by the head of the International Fact-Checking Network as “one more kick in the teeth” for the industry. Despite a difficult start to the year for fact checkers, there has so far been only a slight fall in the number of fact-checking outfits globally, according to the latest fact-checking census.
University academics in multiple countries have been caught concealing AI prompts in papers submitted for peer review. According to Nikkei Asia, the prompts, which were hidden from human eyes using white text and tiny fonts, instructed AI tools to give the authors’ papers glowing reviews. One researcher claimed the tactic was a way to counter “lazy reviewers” who relied on AI. The approach isn’t unique to academia: job hunters have been hiding similar prompts in their resumes ... with mixed results.
Aussie website Cool.org has released a new package of curriculum-aligned resources to help educators teach students about misinformation and disinformation. It includes seven practical units along with two professional learning modules produced in partnership with the RMIT Information Integrity Hub. And if someone you know has fallen down the conspiracy rabbit hole, you’ll find useful tips to help you navigate those tricky conversations.
This newsletter is produced by the RMIT Information Integrity Hub, an initiative to combat harmful misinformation by empowering people to be critical media consumers and producers.