Claiming AI can lead to human extinction is an overreaction: RMIT AI experts
AI experts comment on the Center for AI Safety's statement likening AI risks to pandemics and nuclear war.

Professor Matt Duckham, Director, Information in Society Enabling Impact Platform

Topics: AI, overreaction, disruption, discrimination. 

“The statement could be viewed as an astonishing overreaction, and something I expect its signatories will look back on sheepishly in the coming months and years. 

“To compare this technology with the truly existential threats we face today, such as climate change, war, and pandemics, is simply absurd. 

“No matter how surprising or remarkable the new AI capabilities are, this technology is just statistical models of word frequencies.

“The technology nevertheless marks a remarkable and exciting milestone in AI. It is surely causing big changes and disruptions in many industries and sectors of society, which are only set to grow over the next few years. 

“Many of those disruptions will be negative, but hopefully many more will be positive. None will be apocalyptic though. 

“The real harm of this technology lies in its subtle amplification of discrimination, inequity, exploitation, and entrenched advantage – harms that have been evident and growing in the use of AI across industry, institutions, and society for some time.”

Professor Matt Duckham is Director of the RMIT Information in Society Enabling Impact Platform, with expertise in spatial computing, geo-AI and geo-visualisation.  

Professor Lisa Given, Enabling Impact Platform Director, Social Change, Research and Innovation Capability

Topics: AI panic, AI tools, risks and harms.

“The risk of extinction from AI is highly speculative and does not compare to the real and immediate global risks humanity faces, such as climate change and the COVID-19 pandemic – real and tangible concerns that governments need to address globally.

“When institutions issue warning statements like this, they risk creating unnecessary panic about future, potential technologies that may never materialise. 

“They also focus people’s attention away from the real risks posed by AI tools today (such as misinformation, bias, lack of transparency, potential for abuse), which are causing real harm.

“The public has been exploring the usefulness of AI tools daily, yet is often unaware of the real limitations of these systems and the risks of adopting these emerging technologies.

“Tools that use copyrighted materials without consent, that present false information using a convincing and empathetic tone of voice, that pre-screen job applicants against biased datasets, and that enable image-based abuse – are just some of the real harms we are seeing today. This is where regulation, transparency, and scrutiny are needed urgently.

“AI tools have many benefits to offer humanity, but we need to be critical and careful about how these tools are used for the betterment of society. 

“This requires us to question who has control of these tools, what people and companies may gain (or lose) from how they are used, and what steps we need to take, as a society, to ensure people and companies use these tools appropriately.”

Professor Lisa Given is Professor of Information Sciences and Director of RMIT’s Social Change Enabling Impact Platform. Her research examines people’s use of technology tools for decision-making in business contexts and everyday life.

Professor Matthew Warren, Director, Centre for Cyber Security Research and Innovation 

Topics: cyber security, autonomous weapons, government regulation.

“Doomsday scenarios on AI are nothing new. Remember in 2014, when Stephen Hawking warned AI could end mankind? Or, also in 2014, when Elon Musk warned that with artificial intelligence we are ‘summoning the demon’? 

“Now the Center for AI Safety claims AI systems could lead to human extinction. It is all hype that takes attention away from real challenges such as global warming, pollution, famine, and more. 

“Will these generative AI models end humanity? No. 

“AI systems should be embraced, as they will help improve society in many ways – including ways we do not even understand now.

“Where they will have a negative impact on society is in widespread job losses, disinformation, and the creation of deepfakes.

“The biggest AI risk the world faces is how authoritarian countries such as China and Russia will develop and apply AI systems, potentially including military applications with autonomous weapons. This is where pressure and controls should be applied.

“Western countries, such as Australia, must develop AI frameworks that will guide the development of AI and identify areas where AI systems will not be developed.  

“The Australian government has now outlined its intention to regulate artificial intelligence, saying there are gaps in existing law and new forms of AI technology will need "safeguards" to protect society based upon a risk approach. 

“This move should be the focus and should be supported. We shouldn’t focus on speculative doomsday scenarios.” 

Professor Matt Warren is Director of the Centre for Cyber Security Research and Innovation and a researcher in cyber security and computer ethics.

Fan Yang, Research Associate, School of Media and Communication

Topics: AI design, technology prejudice, programming.  

“To what extent AI can be risky and lead to human extinction depends on how AI is designed, programmed, supervised, and used.

“Technologies can look very different when humanity, care, and environmental sustainability are centred, as opposed to productivity or efficiency, as with ChatGPT. 

“The problem lies in the social prejudices, inequalities, and injustice that have been embedded and inscribed in the long history of science, technology, and society – which many of us are not aware of until we experience our keyboard auto-correcting our name to an English word, Alexa failing to recognise our accents, or our Instagram filter reassigning us another race or ethnicity. 

“The headline that AI can cause human extinction is eye-catching, but the risks of technologies are more likely to be disproportionately borne by groups of people who are already socially disadvantaged – women, minorities, people of colour, and others. 

“The global pandemic and nuclear wars tell us the same story. 

“AI is part and parcel of capitalism, in which disposable labour has historically been used for capitalists’ financial gain, and it intensifies exploitation and alienation among groups of people who are already disadvantaged.”

Fan Yang is a Research Associate at RMIT. She specialises in Australia-China relations through technologies, WeChat, AI, and Chinese technologies. 

***

General media enquiries: RMIT Communications, 0439 704 077 or news@rmit.edu.au

Acknowledgement of Country

RMIT University acknowledges the people of the Woi wurrung and Boon wurrung language groups of the eastern Kulin Nation on whose unceded lands we conduct the business of the University. RMIT University respectfully acknowledges their Ancestors and Elders, past and present. RMIT also acknowledges the Traditional Custodians and their Ancestors of the lands and waters across Australia where we conduct our business - Artwork 'Sentient' by Hollie Johnson, Gunaikurnai and Monero Ngarigo.