Empire of AI: Karen Hao in Conversation at RMIT

This month, RMIT hosted an In Conversation event with acclaimed journalist and author Karen Hao, exploring the history, context and future of Artificial Intelligence (AI).

The event was organised by RMIT Professors Lisa Given, Jonathan Kolieb and Falk Scholer and sponsored by the University’s Social Change Enabling Impact Platform (SCEIP), the Centre for Human-AI Information Environments (CHAI) within the STEM College, and the Business and Human Rights Centre (BHRIGHT) within the College of Business and Law (CoBL).  

Given, one of the hosts, said that Hao’s work and her book focus on some of the critical challenges society faces when engaging with new technology.

“In particular, looking at OpenAI as an organisation, Sam Altman and some of the practices that she has seen on the ground as a journalist looking at AI,” she explained.

Taking a forward-looking view of AI, Hao’s work stresses not only the need to understand the issues AI will create for our society but also the urgency of pre-emptively developing appropriate policy.

Given said there is a need to understand where a system like ChatGPT comes from.  

“We also need to understand what the questions are that we should be asking about the companies that develop, create, and deploy these kinds of systems into our world,” she said.

Hao explained that the purpose of her book is to cut through the confusion and noise surrounding AI, and to challenge the narratives pushed by large-scale tech companies.

“I was quite shocked and disappointed when ChatGPT came out; how it totally reset a lot of conversations within the AI space to be once again dominated by Silicon Valley's narratives.”

“There was so much rich accountability work that had been done beforehand that had just been overwritten and I felt like it was really hard for the average person in the public to actually then have a real understanding of this technology.” 

Image: Karen Hao and Dr Kobi Leins in discussion on stage

Ethical Dilemmas and the Need for Further Research

Hao discussed how the scale at which AI is being developed benefits the tech industry rather than consumers themselves.

“The AI industry has created what they call scaling laws, which is basically an observation that the more data that you put into AI models, and the larger the supercomputer that you use to train these AI models, the more powerful they seem to be.”

“Due to methods such as these, the technologies that they build are not actually built to serve us. We are serving the tech industry.”

Given the vast scale of data accumulation, Hao emphasised the need for broader research to be undertaken and considered in the development of AI.

“There was a vast array of other research that was happening that demonstrated that you could actually advance AI research with tiny amounts of data and computers,” she said.

“People were training powerful AI models on mobile phones, and all other kinds of ideas around just how to create more efficient AI systems that are, in fact, more robust in some ways than these colossal models.”

“These colossal models often break down, and we don't really know how they work because we don't really know what we put into them. But OpenAI decided to take this scaling approach.”

The themes of Hao’s work closely align with research underway at the Centre for Human-AI Information Environments (CHAI), a newly established Leading Research Centre in RMIT’s STEM College that brings together researchers from technology and social science disciplines across all three of RMIT’s Colleges.

CHAI brings computer scientists and engineers together with experts in social research, working with external groups and organisations to examine how society can critique and evaluate existing systems and how better systems can be built from the ground up.

Image: Group photo with Karen Hao in centre

The Future of AI

Looking ahead, Hao emphasised that she is not against the development of AI in general, but explained that these systems should be developed with societal challenges in mind.

“The vision that I see for AI is one in which people first centre what are the things that we actually need in this world as people, and then, what aspects of those problems actually lend themselves to the strengths of AI?” she said.

Hao stressed that achieving community driven, participatory AI requires collective resistance to current industry practices. “There are actually communities that have successfully stalled data centre development because the data centre is not at all benefiting the communities and these companies have entered into the community without any transparency whatsoever.”

“I think if we can have the goal of that resistance and rely on democratic compensation along with the whole supply chain of AI, that's how the empire falls and then we shift towards a more democratic vision of AI development.”

 

Story: Claudia Lavery & Finn Devlin 

19 September 2025
