Research Projects & Impact

Underpinned by state-of-the-art facilities, a strong track record of high-impact publications, and a history of successful collaborations and grant funding, CHAI’s transdisciplinary approach positions it as a leader in shaping the future of human–AI information environments.

Our team’s work informs policy, advances responsible innovation, and translates academic research into meaningful societal impact, ensuring that emerging technologies contribute positively to the ways people live, learn, and connect.

Projects


Gendered Norms and Gaming Influencers: Promoting positive and respectful gaming for ‘tween’ boys

Funder: eSafety Commissioner – Preventing Tech-Based Abuse of Women Grants Program
Team: Lead Investigator: Assoc. Prof. Lauren Gurrieri (RMIT University)
Co-Investigators: Prof. Lisa Given, Dr Melissa Wheeler, Dr Lukas Parker, Dr Dave Micallef, Prof. Emma Sherry (RMIT University)

This project examines how gender stereotypes and ideals promoted by gaming influencers shape the attitudes and behaviours of ‘tween’ boys aged 9–12. By analysing the content and influence of these online figures, the research explores how harmful gender norms contribute to tech-based abuse on gaming platforms and how these behaviours emerge among young users.

The project also investigates the challenges parents and carers face when navigating gaming influencer content with their children. Insights will be used to build parents’ capacity to recognise, discuss, and counter harmful messaging, supporting more positive, respectful, and safe engagement in online gaming environments.


Reducing hallucination in large language models via knowledge-based reasoning

Funder: Australian Research Council – Discovery Project 2026
Team: Prof Xiuzhen (Jenny) Zhang, Prof Jeffrey Chan, Dr Estrid (Jiayuan) He, Prof Erik Cambria

This project tackles one of the most pressing challenges in artificial intelligence: improving the factual accuracy and reliability of generative models. Focusing on the phenomenon of AI hallucination—where systems generate plausible but incorrect information—the research aims to advance new methods for integrating verified, external knowledge into large language models. This has direct applications for news verification, information integrity, and reducing misinformation at scale.

The project will develop novel techniques that enable AI systems to reason more effectively and produce grounded, evidence-based outputs for complex fact-checking tasks. Expected outcomes include enhanced reliability and trustworthiness of generative AI technologies, contributing to safer, more accurate human–AI information environments.


Protecting Australia from Online Abuse: Making Online Safety Work for All

Funder: Australian Research Council – Discovery Early Career Researcher Award (DECRA) 2026
Team: Lead Investigator: Dr Senuri Wijenayake

This research program addresses one of Australia’s most pressing digital challenges: the widespread and growing impact of online abuse. More than 70% of Australians have experienced at least one incident of online harm, from offensive comments to targeted hate speech, with the burden falling disproportionately on gender-diverse people, people with disabilities, and culturally diverse communities. These groups rely heavily on social media platforms for social connection, identity exploration, and access to support services, yet are rarely included in the design of online safety mechanisms.

The project aims to design, test, and evaluate novel, user-centred safety features that move beyond reactive moderation and toward preventive, community-based interventions. By collaborating directly with affected users and experts in interaction design and policy, the research will produce empirically validated prototypes, cross-platform design guidelines, and policy recommendations that enhance the feasibility and adoption of new safety solutions. It will also develop evaluative frameworks to assess future online safety innovations. Together, these outcomes will advance Australia’s capacity for online safety research and support safer, more inclusive digital environments for vulnerable communities who depend on these platforms for meaningful participation and connection.


ARC Research Hub for Intelligent Contaminant-Sensing in Complex Environments (IC-SensE Hub)

Director: Prof Sumeet Walia
Chief Investigators: Prof Mark Hutchinson, Prof Noushin Nasiri, Prof Lisa Given, A/Prof Ivan Lee, Prof Priyadarsini Rajagopalan, Prof Akram Hourani, Prof Kandeepan Sithamparanathan, A/Prof Jiawen Li, A/Prof Aaron Elbourne, A/Prof Sang Heon Lee, Dr Danny Wong, Dr Ylias Sabri, Dr Xiaoning Liu, Dr Zlatko Kopecki, Dr Nisa Salim, Dr Saffron Bryant

The IC-SensE Hub aims to transform Australia’s environmental monitoring ecosystem into a user-responsive, technology-driven industry serving agriculture, water systems, and built environments. Australia currently lacks real-time, comprehensive systems for detecting chemical and biological contaminants across air, water, and soil—limitations that reduce productivity, impede effective risk management, and increase environmental and public health vulnerabilities.

The Hub responds to these challenges by developing miniaturised, AI-enabled sensing technologies capable of instant contaminant detection and predictive hazard forecasting across diverse and complex settings. These innovations will allow industries to monitor and respond to emerging risks before they escalate, enabling faster intervention and more resilient environmental management practices.

Expected outcomes include a new generation of autonomous sensing and forecasting capabilities, significant reductions in emissions, improved industrial productivity through optimised environmental operations, and enhanced public health outcomes as contaminants are identified and addressed more rapidly. The technologies developed through the Hub will have broad relevance for adjacent sectors—including healthcare, transport, space, and defence—positioning Australia as a global leader in clean technology and environmental stewardship. To maximise national benefit, the Hub will engage deeply with industry partners, develop knowledge-transfer and training programs, and implement accessible communication strategies that translate complex sensing and AI innovations into actionable insights for policymakers, industry leaders, and the wider community.


Acknowledgement of Country

RMIT University acknowledges the people of the Woi wurrung and Boon wurrung language groups of the eastern Kulin Nation on whose unceded lands we conduct the business of the University. RMIT University respectfully acknowledges their Ancestors and Elders, past and present. RMIT also acknowledges the Traditional Custodians and their Ancestors of the lands and waters across Australia where we conduct our business.

Artwork: 'Sentient' by Hollie Johnson, Gunaikurnai and Monero Ngarigo.

Learn more about our commitment to Indigenous cultures