Put the Pitchforks Down

On tribalism, AI shaming, and why the best conversations happen when we stop performing our positions

At a recent sector event focused on AI in education, an academic made a case for returning to pen-and-paper exams. Genuine concern, held with conviction. The response from parts of the room? Eye rolls. Visible disdain. A quiet but unmistakable dismissal. 

You can agree or disagree with that argument. But the reaction was the shocking part. Because somewhere along the way, the AI conversation in higher education stopped being a conversation and started being a performance. Pick a side. Signal your tribe. And whatever you do, don’t get caught asking a question that might put you on the wrong team. A colleague of mine keeps telling me that this has more or less worked itself out in creative fields: if you use Generative AI in your artistic endeavours you’re a pariah, while those who abstain from GenAI entirely seem to occupy a certain echelon of the society they’ve built. But talk to practitioners and that conversation is far more grey than black and white, and it shifts with every news headline or model release.

Louie Giray calls this “AI shaming” in a 2024 paper in the Annals of Biomedical Engineering. He identifies how academics who use AI get criticised, dismissed, or made to feel their work is less legitimate. He profiles the “traditionalists” and “elitists” who gatekeep what counts as real scholarship. It’s a useful framework. But AI shaming goes both ways. The eye rolls at that event? That’s shaming too. Dismissing a colleague’s concerns about invigilation because you’ve personally moved past the debate is having 2023’s argument in 2026, just from a different direction. 

There is a certain amount of reductionism in this argument because, whether we like it or not, Generative AI has the potential to make some areas of our lives much easier and others much harder: the target keeps moving. Nobody has the full picture. People on both sides routinely and confidently share misinformation, not out of malice, but because staying properly informed is almost a full-time job. What was correct yesterday is made redundant by new releases or new information, and anything robust is built on technology that is already two or three release cycles behind. There’s a strange irony in the fact that shame, a deeply human emotion, has become one of the dominant forces shaping how we talk about a technology built on pattern recognition.

Which is why the conversations we’ve been having in early 2026 have been so encouraging. GAILE recently ran five capability-building sessions across RMIT, working with educators at every point on the AI Skills Continuum. The rooms were full of people with wildly different levels of experience and opinion. And the quality of dialogue was remarkable. 

In one session, a poll revealed that someone felt AI had not yet disrupted their discipline. In certain corners of the internet, that’s a pile-on waiting to happen. In the room, what followed was a measured conversation about what disruption looks like and where privacy concerns fit in. The initial instinct may have been “how could you think that?”, but what came out was curiosity. People signalling, “I may not fully understand your perspective, but explain it to me.” That’s the difference between a conversation and a performance. It’s harder to be tribal when you’re sitting across from someone rather than hiding behind a keyboard. It’s the work of good humans doing good work.

We’ve also seen what happens without that kind of space. In a recent pilot of an AI initiative, some participants pulled out, not because the idea or thesis was wrong, but because they didn’t feel they had enough information and knowledge to navigate the curly questions that were inevitably going to come up, or what those questions might mean for second- and third-order effects. The tribalism there wasn’t pro-AI versus anti-AI. It was a gap between those ready to move, who had already settled their own thesis on risk versus reward, and those who needed time to think. Without a safe space for that tension, caution won out over a thoughtful, responsible approach. That is a signal of what we need to do: prepare people, and create enough safety for them to build.

At GAILE, we’re pro responsible AI: an easy statement to make. What it looks like in practice is holding space for disagreement, discomfort, and the kind of slow, honest dialogue that doesn’t make for great social media posts (although I hold out hope for a slow version of TikTok) but does shift practice. It means building rooms where someone can advocate for pen-and-paper exams without being dismissed, and champion AI-assisted assessment without being shamed. Rooms where students are part of the conversation, so that how AI shows up in learning and teaching is shaped by steps the community takes together.

So, if you’re tired of the tribalism, the eye rolls, the hot takes, the ‘you just don’t get it’ energy, come talk to us. We’re building those rooms. Pitchforks optional, but we’d prefer you left them at the door.

05 March 2026
