Australian media need generative AI policies to help navigate misinformation and disinformation

New research into generative AI images shows that only just over a third of the media organisations surveyed had an image-specific AI policy in place at the time of the research.

The study, led by RMIT University in collaboration with Washington State University and the QUT Digital Media Research Centre, interviewed 20 photo editors, or staff in related roles, from 16 leading public and commercial media organisations across Europe, Australia and the US about their perceptions of generative AI technologies in visual journalism.

Lead researcher and Senior Lecturer Dr TJ Thomson said that while most staff interviewed were concerned about the impact of generative AI on misinformation and disinformation, factors that compound the issue, such as algorithmic bias and the scale and speed at which content is shared on social media, were out of their control.

“Photo editors want to be transparent with their audiences when generative AI technologies are being used, but media organisations can't control human behaviour or how other platforms display information,” said Thomson, from the School of Media and Communication. 

“Audiences don’t always click through to learn more about the context and attribution of an image. We saw this happen when AI images of the Pope wearing Balenciaga went viral, with many believing it was real because it was a near-photorealistic image shared without context. 

“Photo editors we interviewed also said images they receive don’t always specify what sort of image editing has been done, which can lead to news sites sharing AI images without knowing, impacting their credibility.” 

Thomson said having policies and processes in place that detail how generative AI can be used across different forms of communication could help prevent incidents of mis- and disinformation, such as the altered images of Victorian MP Georgie Purcell.

“More media organisations need to be transparent with their policies so their audiences can also trust that content was made or edited in the ways the organisation says it was,” he said.

[Image: A person scrolling on their phone at night in the back seat of a car. Credit: Adobe Stock]

Banning generative AI use not the answer 

The study found five of the surveyed outlets barred staff from using AI to generate images, though for three of those outlets the ban applied only to photorealistic images. Others allowed AI-generated images if the story was about AI.

“Many of the policies I’ve seen from media organisations about generative AI are general and abstract. If a media outlet creates an AI policy, it needs to consider all forms of communication, including images and videos, and provide more concrete guidance,” Thomson said. 

“Banning generative AI outright would likely be a competitive disadvantage and almost impossible to enforce. 

“It would also deprive media workers of the technology’s benefits, such as using AI to recognise faces or objects in visuals to enrich metadata and to help with captioning.” 

Thomson said Australia was still at “the back of the pack” when it came to AI regulation, with the US and the EU leading. 

“Australia’s population is much smaller, so our resources limit our ability to be flexible and adaptive,” he said. 

“However, there is also a wait-and-see attitude where we are watching what other countries are doing so we can improve or emulate their approaches. 

“I think it’s good to be proactive, whether that’s from government or a media organisation. If we can show we are being proactive to make the internet a safer place, it shows leadership and can shape conversations around AI.” 

Algorithmic bias affecting trust  

The study found journalists were concerned about how algorithmic bias could perpetuate stereotypes around gender, race, sexuality and ability, leading to reputational risk and distrust of media.  

“We had a photo editor in our study type a detailed prompt into a text-to-image generator to show a South Asian woman wearing a top and pants,” Thomson said. 

“Despite detailing the woman’s clothing, the generator persisted with creating an image of a South Asian woman wearing a sari.

“Problems like this stem from a lack of diversity in the training data. It leads us to question how representative our training data are, and who is being represented in our news, stock photos, and even cinema and video games, all of which can be used to train these algorithms.”
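For readers who want to see what such a check involves, the kind of repeatable probe the photo editor describes can be sketched in a few lines. The sketch below is illustrative only: the study does not name the generator its participant used, and the open-source Stable Diffusion model, the Hugging Face diffusers library, and the prompt wording are all assumptions made for the example.

    # Minimal, illustrative bias probe for a text-to-image model.
    # Assumptions: Stable Diffusion 2.1 via Hugging Face diffusers
    # (pip install diffusers transformers torch) and a CUDA GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
    ).to("cuda")

    # A prompt that explicitly specifies the subject's clothing.
    prompt = "photo of a South Asian woman wearing a plain blouse and trousers"

    # Fixed seeds make each generation repeatable, so one can audit whether
    # outputs follow the prompt or default to stereotyped attire.
    for seed in range(5):
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"bias_probe_seed_{seed}.png")

Running the same seeds again reproduces the same images, which is what makes this a probe rather than a one-off anecdote: the outputs can be inspected, counted and compared across prompts or models.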

Copyright was also a concern for photo editors as many text-to-image generators were not transparent about where their source materials came from.  

While there have been generative AI copyright cases making their way into the courts, such as The New York Times’ lawsuit against OpenAI, Thomson said it’s still an evolving area. 

“Being more conservative, such as only using third-party AI generators that are trained on proprietary data, or only using them for brainstorming or research rather than publication, can lessen the legal risk while the courts settle the copyright question,” he said.

“Another option is to train models on an organisation’s own content; that way, the organisation can be confident it owns the copyright to the resulting generations.”

[Image: A journalist looking at footage being edited on a computer. Researchers say generative AI can help photojournalists and editors complete more menial tasks, allowing them more time for better projects. Credit: Adobe Stock]

Generative AI is not all bad

Despite concerns about mis- and disinformation, the study found most photo editors saw many opportunities for using generative AI, such as brainstorming and generating ideas. 

Many were happy to use AI to generate illustrations that were not photorealistic, while others were happy to use it to generate images when good stock images weren’t available.

“For example, existing stock images of bitcoin all look quite similar, so generative AI can help fill a gap in what is lacking in a stock image catalogue,” Thomson said.  

While there was concern about losing photojournalism jobs to generative AI, one editor interviewed said they could imagine using AI for simple photography tasks.

“Photographers who are employed will get to do more creative projects and fewer tasks like photographing something on a white background,” said the editor.

“One could argue that those things are also very easy and simple and take less time for a photographer, but sometimes they’re a headache too.”

“Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies” was published in Digital Journalism (DOI: 10.1080/21670811.2024.2331769).

TJ Thomson (RMIT University), Ryan Thomas (Washington State University) and Phoebe Matich (Queensland University of Technology) are co-authors. 

Thomson was a visiting fellow at the German Internet Institute in Berlin, which allowed him to complete the European portion of this research. 


Story: Shu Shu Zheng

