CHAI

CHAI leads the world in creating and advancing safe, ethical, and socially responsible human–AI information environments.

The Centre for Human-AI Information Environments (CHAI) brings together leaders in computing, information, and social sciences to deliver evidence-based, responsible design, development, and evaluation of human-centred, AI-enabled information environments for industry, community, and government.

CHAI is a transdisciplinary research centre shaping the future of human–AI interaction: responsibly, transparently, and with purpose.

Research Approach

CHAI takes a transdisciplinary, human-centred approach to how AI is imagined, designed, built, and governed.

We bring together computing, information science, design, social science, and the humanities to ensure that technical innovation in AI is deeply grounded in human experience, ethics, and social responsibility.

Our researchers:
  • develop and evaluate new AI-enabled technologies, systems, and interfaces
  • pioneer methods and models that bridge computing, design, and the social sciences
  • advance responsible innovation through frameworks that promote fairness, accountability, transparency, and trust
  • address urgent challenges, including misinformation, bias, and harm

We strengthen capability in responsible AI by building talent and creating pathways for impact. We mentor emerging researchers, collaborate with industry, government, and communities, and embed ethical and inclusive design principles into practice.

Our work connects theory to real-world decision-making. With state-of-the-art facilities, a strong track record of research success, and globally recognised expertise, we drive research that informs policy, shapes industry standards, and delivers tools that are socially beneficial and aligned with human values.

We don’t just ask what AI can do.
We ask what AI should do — and we build towards it.

Research Projects & Impact

Underpinned by state-of-the-art facilities, a strong track record of high-impact publications, and a history of successful collaborations and grant funding, CHAI’s transdisciplinary approach positions it as a leader in shaping the future of human–AI information environments. Our team’s work informs policy, advances responsible innovation, and translates academic research into meaningful societal impact, ensuring that emerging technologies contribute positively to the ways people live, learn, and connect.

eSafety Commissioner
Preventing Tech-based Abuse of Women Grants Program

Lead Investigator: Associate Professor Lauren Gurrieri (RMIT University). Co-Investigators: Professor Lisa Given, Dr Melissa Wheeler, Dr Lukas Parker, Dr Dave Micallef, and Professor Emma Sherry (RMIT University).

This project aims to tackle the drivers of tech-based abuse on gaming platforms by building the capacity of parents and carers to address harmful gender norms affecting their children.

The project will investigate what gender stereotypes and ideals are promoted by gaming influencers and how they impact ‘tween’ boys, aged 9 to 12 years. It will also identify the key challenges faced by parents navigating harmful gender and gaming influencer content with their children.  

Australian Research Council

Discovery Project 2026

Investigators: Professor Xiuzhen (Jenny) Zhang, Professor Jeffrey Chan, Dr Estrid (Jiayuan) He, and Professor Erik Cambria.

This project addresses one of AI’s most critical challenges — improving the factual accuracy and reliability of generative systems — with direct applications to news fact-checking and combating misinformation.

AI hallucination is a phenomenon in which generative AI models produce information that appears plausible but is factually incorrect. This project aims to advance knowledge in detecting and mitigating hallucinations by developing innovative techniques for integrating external factual knowledge into AI models. Expected outcomes include a suite of techniques that enhance AI models' capability to reason and generate grounded information for complex fact-checking tasks. These should deliver significant benefits, including more reliable generative AI systems and more effective efforts to combat misinformation at scale.
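At its core, the knowledge-integration approach the project describes follows a retrieve-then-verify pattern: fetch evidence from an external source, then judge a claim against it. The sketch below is a minimal, hypothetical illustration of that pattern only, not the project's method; the knowledge base, the word-overlap scoring, and the names tokenize, retrieve, and check_claim are all invented for illustration, standing in for a trained retriever and verifier.

import re
from collections import Counter

# Toy external knowledge store (hypothetical); a real system would query
# a document index or knowledge graph with a trained retriever.
KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The Great Barrier Reef lies off the coast of Queensland, Australia.",
]

def tokenize(text: str) -> Counter:
    # Lowercase bag-of-words representation of the text.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(claim: str, k: int = 1) -> list[str]:
    # Rank knowledge-base entries by word overlap with the claim.
    claim_tokens = tokenize(claim)
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: sum((tokenize(doc) & claim_tokens).values()),
        reverse=True,
    )
    return ranked[:k]

def check_claim(claim: str, threshold: float = 0.5) -> tuple[bool, str]:
    # A claim counts as supported if enough of its words appear in the
    # best-matching evidence; a real verifier would be a trained model.
    evidence = retrieve(claim)[0]
    claim_tokens = tokenize(claim)
    overlap = sum((tokenize(evidence) & claim_tokens).values())
    supported = overlap / max(sum(claim_tokens.values()), 1) >= threshold
    return supported, evidence

if __name__ == "__main__":
    verdict, evidence = check_claim("The Eiffel Tower is in Paris")
    print(f"Supported: {verdict} | Evidence: {evidence}")

In a production fact-checking system, the overlap scoring would be replaced by dense retrieval over large document collections and a learned verification model, but the overall pipeline shape is the same.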

Researchers Professor Lisa Given, Dr Sarah Polkinghorne, and Dr Alexandra Ridgway are investigating how generative artificial intelligence (GenAI) tools mimic human emotion, and what that means for trust, empathy, and social connection in digital environments.

The study examines three prominent GenAI systems (OpenAI’s ChatGPT, the National Eating Disorder Association’s Tessa, and Luka’s Replika) to understand how they replicate emotional responsiveness and credibility. Drawing on sociologist Arlie Hochschild’s concept of “feeling rules”, the research explores how these tools exploit, reinforce, or violate social norms around appropriate emotional expression.

The findings reveal that while GenAI systems often appear caring or apologetic, this imitation of empathy can mask their limitations and even create potential for misinformation and harm.

Facilities


Usability Lab

Led by Dr Damiano Spina and Professor Falk Scholer, School of Computing Technologies.

RMIT’s Usability Lab is a controlled environment for usability research, including information retrieval, human–AI cooperation, and evaluation methodologies.


Acknowledgement of Country

RMIT University acknowledges the people of the Woi wurrung and Boon wurrung language groups of the eastern Kulin Nation on whose unceded lands we conduct the business of the University. RMIT University respectfully acknowledges their Ancestors and Elders, past and present. RMIT also acknowledges the Traditional Custodians and their Ancestors of the lands and waters across Australia where we conduct our business.

Artwork: ‘Sentient’ by Hollie Johnson, Gunaikurnai and Monero Ngarigo.
