Research Projects & Impact

Underpinned by state-of-the-art facilities, a strong track record of high-impact publications, and a history of successful collaborations and grant funding, CHAI’s transdisciplinary approach positions it as a leader in shaping the future of human–AI information environments.

Our team’s work informs policy, advances responsible innovation, and translates academic research into meaningful societal impact, ensuring that emerging technologies contribute positively to the ways people live, learn, and connect.

Theme 1: Online Safety, Gender, and Social Harm


Gendered Norms and Gaming Influencers: Promoting positive and respectful gaming for ‘tween’ boys

eSafety Commissioner
Preventing Tech-based Abuse of Women Grants Program

Lead Investigator: A/Prof Lauren Gurrieri (RMIT University); Co-Investigators: Prof Lisa Given, Dr Melissa Wheeler, Dr Lukas Parker, Dr Dave Micallef, Prof Emma Sherry (RMIT University).

This project aims to address drivers of tech-based abuse on gaming platforms by building the capacity of parents and carers to address harmful gender norms impacting their children.

The project will investigate what gender stereotypes and ideals are promoted by gaming influencers and how they impact ‘tween’ boys, aged 9 to 12 years. It will also identify the key challenges faced by parents navigating harmful gender and gaming influencer content with their children.  


EXIST: sEXism Identification in Social neTworks

Dr Damiano Spina and colleagues at Universidad Nacional de Educación a Distancia and Universitat Politècnica de València, Spain.

EXIST is an international research initiative dedicated to identifying and understanding sexism in social networks. It examines sexist content in all its forms—from explicit misogyny to subtle, implicit behaviours that reinforce gender inequality. Across its series of shared tasks and scientific workshops, EXIST brings together researchers to develop and evaluate computational methods for analysing sexism in text, images, and video.

The CLEF EXIST Lab focuses specifically on detecting sexist messages in complex multimedia formats such as memes and short videos, recognising that gender discrimination embedded in society is increasingly reproduced online. In this edition, the Lab extends the Learning with Disagreement (LwD) framework by incorporating sensor-based data from people exposed to potentially sexist content, including measures like heart-rate variability, EEG, and eye-tracking.

Now entering its sixth edition at CLEF 2026 in Jena, Germany, EXIST continues to advance rigorous, multidisciplinary research into how sexism manifests online and how it can be more accurately detected, characterised, and addressed.
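
As a rough illustration of the Learning with Disagreement idea, the sketch below trains a tiny classifier against the distribution of annotator votes rather than a collapsed majority label. The messages, vote proportions, and bag-of-words features are invented for illustration only and do not reflect EXIST data or participant systems.

```python
# Minimal Learning with Disagreement (LwD) sketch: keep the distribution of
# annotator votes as a soft target instead of collapsing it to a 0/1 label.
# All messages and vote proportions below are invented for illustration.
import numpy as np

messages = [
    "women can't play competitive games",
    "great stream tonight, well played everyone",
    "she only got picked because she's a girl",
    "looking forward to the next tournament",
]
# Fraction of annotators who labelled each message as sexist.
soft_targets = np.array([1.0, 0.0, 0.66, 0.25])

# Tiny bag-of-words representation.
vocab = sorted({w for m in messages for w in m.split()})
X = np.array([[m.split().count(w) for w in vocab] for m in messages], dtype=float)

# Logistic regression trained with cross-entropy against the soft labels.
rng = np.random.default_rng(0)
w, b, lr = rng.normal(scale=0.01, size=X.shape[1]), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "sexist"
    grad = p - soft_targets                 # gradient of the soft cross-entropy loss
    w -= lr * (X.T @ grad) / len(messages)
    b -= lr * grad.mean()

for m, p in zip(messages, 1.0 / (1.0 + np.exp(-(X @ w + b)))):
    print(f"{p:.2f}  {m}")
```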


Flood of AI Deepfakes Creating the Perfect Alibi for Wrongdoers: Research

Researcher: Dr Nicola Shackleton

In an age where artificial intelligence can create hyper-realistic audio and video, the boundaries between what’s real and what’s fake are becoming dangerously blurred. Research reveals a disturbing trend: bad actors are using AI deepfakes not just to deceive, but as alibis, claiming that any incriminating media is “just AI-generated”. This isn’t just about misinformation; it’s about undermining trust in genuine evidence, too.

For Human-AI information environments, this poses a profound challenge. As our digital tools become more powerful, so too does the potential for manipulation. We need systems that don’t just detect fakes but also embed mechanisms of accountability and trust. That means better deepfake detection, clearer provenance of digital content, and public literacy around synthetic media. Our mission is to build and promote information ecosystems where AI strengthens truth — not erodes it.
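
As one small, concrete piece of the provenance idea, the sketch below shows how media could be hashed and signed at the point of capture so that later tampering (or a false “it’s just AI-generated” claim about unaltered footage) can be checked against a verifiable record. It is a generic, standard-library illustration, not a description of any system developed in this research.

```python
# Generic illustration of content provenance: hash and sign media at capture,
# verify later. Not a description of any specific project system.
import hashlib
import hmac
import secrets

def sign(media_bytes: bytes, key: bytes) -> str:
    """Keyed signature a capture device or trusted service could attach."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, signature: str, key: bytes) -> bool:
    """Later check: does the media still match what was originally signed?"""
    return hmac.compare_digest(sign(media_bytes, key), signature)

device_key = secrets.token_bytes(32)     # held by the capture device / service
original = b"...raw video bytes..."      # placeholder for captured media
tag = sign(original, device_key)

print(verify(original, tag, device_key))            # True: provenance intact
print(verify(original + b"edit", tag, device_key))  # False: content was altered
```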


Theme 2: Strengthening the Factual Accuracy of AI Systems


Reducing hallucination in large language models via knowledge-based reasoning

Australian Research Council

Discovery Project 2026

Professor Xiuzhen (Jenny) Zhang, Professor Jeffrey Chan, Dr Estrid (Jiayuan) He, and Professor Erik Cambria

This project addresses one of AI’s most critical challenges — improving the factual accuracy and reliability of generative systems — with direct applications to news fact-checking and combating misinformation.

AI hallucination is a phenomenon where generative AI models produce information that appears plausible but is factually incorrect. This project expects to advance knowledge in detecting and mitigating hallucinations by developing innovative techniques for integrating external factual knowledge into AI models. Expected outcomes include a suite of techniques that enhance AI models' capability to reason and generate grounded information for complex fact-checking tasks. This should provide significant benefits, such as improved reliability of generative AI systems and more effective combating of misinformation at scale.
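
As a toy illustration of grounding generation in external knowledge, the sketch below checks claims, expressed as subject–relation–object triples, against a small knowledge base and flags anything unsupported. The knowledge base, triples, and three-way labels are invented for illustration and are not the project's actual techniques.

```python
# Toy sketch of knowledge-grounded fact checking: a claim extracted from
# generated text is only accepted if an external knowledge source supports it.
# The knowledge base and claims are invented for illustration.

KNOWLEDGE_BASE = {
    ("Canberra", "capital_of"): "Australia",
    ("Wellington", "capital_of"): "New Zealand",
}

def check_claim(subject: str, relation: str, value: str) -> str:
    """Compare a (subject, relation, object) claim against stored knowledge."""
    known = KNOWLEDGE_BASE.get((subject, relation))
    if known is None:
        return "unverifiable"  # nothing to ground against: flag for human review
    return "supported" if known == value else "hallucination"

print(check_claim("Canberra", "capital_of", "Australia"))    # supported
print(check_claim("Canberra", "capital_of", "New Zealand"))  # hallucination
print(check_claim("Ottawa", "capital_of", "Canada"))         # unverifiable
```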


Theme 3: Trustworthy Human-AI Interaction


‘I Think I Misspoke Earlier. My Bad!’: How Generative AI Imitates Human Emotion

Researchers: Prof Lisa Given, Dr Sarah Polkinghorne, Dr Alexandra Ridgway

This project investigates how generative AI systems mimic human emotion and the implications for trust, empathy, and social connection. Studying ChatGPT, the National Eating Disorder Association’s Tessa, and Luka’s Replika, the team examines how these tools replicate emotional responsiveness and credibility. Using Arlie Hochschild’s concept of “feeling rules,” the research analyses how GenAI enforces, exploits, or disrupts norms around emotional expression.
Findings show that AI systems can appear empathetic or apologetic while masking their limitations, creating risks of emotional manipulation, misinformation, and harm.


"Query variation: An unrecognised cause of polarisation"

Researcher: Prof Lauren Saling

This project examines how people’s own search behaviours (not just platform algorithms) contribute to social and political polarisation. Drawing on expertise from computer science, information science, and psychology, the research investigates how the questions people ask, and the way they prompt for information, shape the viewpoints they encounter online.

While polarisation is often attributed to echo chambers and algorithmic bias, this project highlights the equally important “demand side” of information environments: user-generated queries. By understanding how query formulation narrows or broadens exposure to diverse perspectives, the team is developing interventions that support healthier information-seeking practices and encourage engagement across viewpoints.
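
One simple way to picture the “demand side” effect is to compare how much the results for two differently worded queries on the same topic overlap: low overlap means small changes in wording already steer searchers toward different sources. The toy index, queries, and term-overlap ranking below are invented for illustration and are not the project's models or data.

```python
# Toy illustration of query variation: two phrasings of the same topic can
# surface almost entirely different documents. Index and queries are invented.

DOCS = {
    1: "vaccination safety review finds vaccines safe for children",
    2: "health authority data shows vaccination is safe",
    3: "parents report vaccine injury after routine shots",
    4: "lawyer explains vaccine injury compensation risks",
}

def search(query: str, k: int = 2) -> set[int]:
    """Rank documents by simple term overlap and return the top-k ids."""
    terms = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(terms & set(DOCS[d].split())))
    return set(ranked[:k])

def jaccard(a: set[int], b: set[int]) -> float:
    return len(a & b) / len(a | b)

neutral = search("is vaccination safe for children")
loaded = search("vaccine injury risks for parents")
print(f"neutral: {sorted(neutral)}  loaded: {sorted(loaded)}  "
      f"overlap: {jaccard(neutral, loaded):.2f}")
```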


PhD Project: Using Physiological Cues to Improve Empathy in Mixed Reality Human-AI Interaction

Researcher: Ms Zhidian Lin

Supervisors: Dr Allison Jing, A/Prof Ryan Kelly, Prof Fabio Zambetta

This project investigates how physiological signals—such as eye gaze, facial expressions, galvanic skin response (GSR), and heart rate—can be used as both input and output to support empathy during human-AI interactions in Mixed Reality environments. Using VR headsets and GSR sensors combined with AI and machine learning, the research explores how AI agents can detect, interpret, and influence human behaviours in immersive XR settings. The project aims to inform the design of empathetic, responsive AI systems that enhance social and emotional experiences in virtual environments.
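
As a minimal illustration of physiological signals as input, the sketch below summarises a short window of simulated heart-rate and skin-conductance samples into features and applies a simple threshold rule to flag heightened arousal, which an agent could then respond to. The signal values, thresholds, and rule are invented for illustration; the project's actual sensing and models are more sophisticated.

```python
# Minimal sketch: turning physiological samples into features an AI agent
# could react to in XR. Values and thresholds are invented for illustration.
from statistics import mean, pstdev

def summarise(heart_rate: list[float], gsr: list[float]) -> dict[str, float]:
    """Summarise a short window of sensor samples into simple features."""
    return {
        "hr_mean": mean(heart_rate),
        "hr_variability": pstdev(heart_rate),  # crude stand-in for HRV
        "gsr_mean": mean(gsr),
    }

def estimate_arousal(features: dict[str, float]) -> str:
    """Toy rule: elevated heart rate combined with elevated skin conductance."""
    if features["hr_mean"] > 95 and features["gsr_mean"] > 6.0:
        return "high"
    return "baseline"

# Simulated ten-second window of samples from a headset and GSR sensor.
hr_window = [88, 92, 97, 101, 104, 103, 99, 96, 98, 102]
gsr_window = [5.8, 6.1, 6.4, 6.9, 7.2, 7.0, 6.8, 6.5, 6.7, 7.1]

features = summarise(hr_window, gsr_window)
print(features, "->", estimate_arousal(features))
# An empathetic agent might slow its pacing or soften its responses
# when arousal is estimated as "high".
```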


Misunderstanding of AI Explanations through Follow-Up Interactions and Multi-Modal Explainers

Researchers: Dr Danula Hettiachchi (RMIT University), Dr Kacper Sokol (ETH Zurich & USI), ARC Centre of Excellence for Automated Decision-Making and Society

This project investigates how users misinterpret AI explanations and proposes a pipeline for generating tailored, interactive, multi-modal explanations that adapt to follow-up information needs. Drawing on generative AI techniques, including large language models, the research addresses situations where initial AI explanations appear clear but contain ambiguity, omit important details, or lead users to incorrect assumptions. The goal is to improve user understanding, prevent misinterpretation, and support responsible Human-AI collaboration.

The work builds on recent findings showing that even intelligible AI explanations can be misread, resulting in over-interpretation or incorrect inferences. The project is supported by the 2025 Google Research Scholar Award.
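
The idea of adapting explanations to follow-up needs can be pictured as a small dialogue loop: an initial explanation is shown, the user's follow-up question is mapped to an intent, and a more specific explanation type is selected in response. The scenario, intents, and keyword routing below are invented for illustration and are not the project's pipeline, which draws on large language models rather than keyword rules.

```python
# Illustrative follow-up loop for interactive explanations. The scenario,
# intents, and keyword routing are invented; a real pipeline might use an LLM
# to interpret follow-up questions instead of keyword cues.

INITIAL_EXPLANATION = ("Your loan application was declined mainly "
                       "due to a short credit history.")

FOLLOW_UP_ROUTES = {
    "why": "feature_importance",  # "why that factor?" -> show weighted factors
    "what if": "counterfactual",  # "what if I ...?" -> show minimal change needed
    "how sure": "uncertainty",    # "how sure is it?" -> show confidence and caveats
}

def route_follow_up(question: str) -> str:
    """Pick the next explanation type from simple keyword cues."""
    q = question.lower()
    for cue, explanation_type in FOLLOW_UP_ROUTES.items():
        if cue in q:
            return explanation_type
    return "clarification"  # unclear request: ask the user to rephrase

print(INITIAL_EXPLANATION)
for follow_up in ["Why does credit history matter so much?",
                  "What if I wait another year before applying?",
                  "How sure is the model about this?"]:
    print(follow_up, "->", route_follow_up(follow_up))
```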


Theme 4: Making AI Systems Fair, Transparent, and Easy to Understand


Towards Responsible Recommendations

Researchers: Prof Yongli Ren, Prof Mark Sanderson, Prof Jeffrey Chan, Dr Ziqi Xu

Recommender systems, like those used by streaming platforms, online retailers, and news feeds, predict what content, products, or information a user is most likely to engage with. This project advances the development of responsible recommender systems by improving their fairness, transparency, and accountability. It investigates how recommendations may advantage or disadvantage particular users or items and develops interpretable techniques to explain why these patterns occur. The research also tackles hidden algorithmic and evaluation biases (such as popularity effects, distorted feedback signals, and position bias in LLM-based recommenders) to reduce their influence on system behaviour. Ultimately, the project aims to produce an end-to-end framework that integrates fair model design, explainable reasoning, and robust, bias-aware evaluation practices, supporting recommender systems that are both effective and demonstrably responsible.
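
One of the evaluation biases mentioned above, popularity bias, can be illustrated with inverse propensity scoring (IPS): clicks on heavily exposed items are down-weighted so a recommender is not rewarded simply for echoing what is already popular. The exposure rates and click log below are invented for illustration and are not part of the project's framework.

```python
# Toy illustration of inverse propensity scoring (IPS) to counter popularity
# bias in evaluation. Exposure rates and clicks are invented for illustration.

# How often each item is shown to users overall (a proxy for its propensity).
exposure = {"blockbuster": 0.50, "hit_song": 0.30, "niche_doc": 0.05, "indie_film": 0.02}

# Clicks the recommender's suggestions received during evaluation.
clicks = ["blockbuster", "blockbuster", "hit_song", "niche_doc", "indie_film"]

naive_score = len(clicks)  # every click counts equally, which favours popular items
ips_score = sum(1.0 / exposure[item] for item in clicks)  # rare items weigh more

print(f"naive click count: {naive_score}")
print(f"IPS-weighted score: {ips_score:.1f}")
# The two blockbuster clicks dominate the naive count (2 of 5) but contribute
# only 4.0 of the ~77.3 IPS total, so serving niche interests well is rewarded
# rather than drowned out by popularity.
```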


A Comparative Analysis of Linguistic and Retrieval Diversity in LLM-Generated Search Queries

Researchers: Dr Lida Rashidi, Dr Oleg Zendel, Prof Mark Sanderson, Prof Falk Scholer

This study examines how well large language models replicate natural human search behaviour by comparing machine-generated queries with human-written queries collected five years apart. Results show that while LLMs can produce varied queries, they differ significantly from human patterns: fewer stopwords, less linguistic diversity, and lower retrieval effectiveness. These insights raise important questions about using LLMs for query generation, user modelling, and evaluation of information-retrieval systems.
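
Two of the differences noted (stopword use and linguistic diversity) are simple to compute over a set of queries, as in the sketch below. The example queries are invented and the measures are simplified stand-ins for those used in the study.

```python
# Illustrative comparison of two query sets on stopword ratio and lexical
# diversity (type-token ratio). Queries are invented; the measures are
# simplified stand-ins for those reported in the study.

STOPWORDS = {"the", "a", "an", "of", "in", "for", "to", "is", "how", "what", "best"}

human_queries = ["how to fix a leaking tap",
                 "best pizza near me",
                 "what is the capital of peru"]
llm_queries = ["leaking tap repair guide",
               "pizza restaurants nearby",
               "peru capital city guide"]

def tokens(queries: list[str]) -> list[str]:
    return [t for q in queries for t in q.lower().split()]

def stopword_ratio(queries: list[str]) -> float:
    toks = tokens(queries)
    return sum(t in STOPWORDS for t in toks) / len(toks)

def type_token_ratio(queries: list[str]) -> float:
    toks = tokens(queries)
    return len(set(toks)) / len(toks)  # higher means more lexical diversity

for name, qs in [("human", human_queries), ("LLM", llm_queries)]:
    print(f"{name}: stopword ratio {stopword_ratio(qs):.2f}, "
          f"lexical diversity {type_token_ratio(qs):.2f}")
```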

