Will new AI Safety Standards actually keep us safe?

The federal government has released the Voluntary AI Safety Standard to provide guidance to organisations affected by emerging technology. It’s also considering mandatory guardrails for AI in high-risk settings. RMIT experts are available to comment.

Kok-Leong Ong, Professor of Business Analytics

“We know it’s important to implement safeguards as AI becomes more common, and this standard is definitely a move in the right direction.

"However, these new voluntary measures may not be suitably effective, while mandatory measures may just add another level of red tape. 

“The voluntary approach means some companies may be selective about how they apply the safeguards. The standard is also ambiguous, leaving each business to assess risk for itself.

“Another big issue relates to the workforce, which is not yet adequately trained to implement the proposed AI safeguards.

“There is also a balance to strike between safety and speed for customers. For example, requiring disclosure of AI use could disrupt credit card application processes that currently work well.”

Kok-Leong Ong is a Professor of Business Analytics in the College of Business and Law at RMIT University. He is the director of the Enterprise AI and Data Analytics Hub research centre.

Lisa Given, Professor of Information Sciences

“A recent survey revealed one third of Australian businesses are using AI without informing employees or customers. The voluntary standards are a welcome interim measure. 

“However, mandatory guardrails are needed to ensure appropriate protections for consumers, employees, and others. They would also align Australia with other jurisdictions, such as the European Union.

“The voluntary standards will help organisations and regulatory bodies take the next step to ensure AI benefits the community, especially given the rising challenges relating to transparency in its design, application and use.

“The government is also looking to provide more clarity about what constitutes ‘high-risk’ AI, which is another critical issue that must be addressed.”

Lisa Given is a Professor of Information Sciences at RMIT University. She is director of RMIT’s Centre for Human-AI Information Environments and the Social Change Enabling Impact Platform.

***

General media enquiries: RMIT Communications, 0439 704 077 or news@rmit.edu.au
