Agentic thinking: why “just add AI” keeps failing educators

If you've tried using AI tools and walked away underwhelmed, you're not alone. The promise was efficiency. The reality is often a chatbot producing generic content you spend more time fixing than it would have taken to write yourself.

Many conversations about an AI bubble were spawned by the much-maligned 2025 MIT study “The GenAI Divide”, which reported a 0% return on investment for AI projects across more than 50 companies.

Yet progress doesn't stop, and now we have AI agents: generative-AI-powered tools that perceive their environment, reason, plan and take actions, potentially built on the same flawed thinking as the standard AI tools before them.

The problem isn't the technology. It's how we've been approaching it. 

In 2026, GAILE wants to approach building AI experiences and agents with this in mind.

Start with the outcome, not the agent

Most AI implementations begin with the wrong question: "What can the AI do?" This leads to vague assistants that can technically do many things but aren't accountable for anything specific.

So instead of asking what a new AI tool, automation, super prompt or agent can do, start by defining the outcome you want to achieve. Rather than a generic “help me with course design”, aim for something concrete: a structured course outline with aligned assessments, populated into Canvas (the LMS), ready for refinement, academic judgement and finesse.

When you start with the outcome, the conversation changes. It's 2026: we get that these AI agents are impressive. But in an education context, where time is short and requirements are large, we need to ask whether an agent can reliably deliver a specific result.

Understand what it means to be an educator

Most educators didn't sign up to be web designers, video producers, or LMS administrators. But the role has expanded to include all of it. As Richard Nelson notes in Academic Identity in the Age of AI, the role has become 'confused and bloated', filled with demands for 'student administration, programme administration, quality assurance, and academic misconduct'.

Course building now involves content creation, resource curation, accessibility formatting, platform mechanics: work that's essential but often pulls time away from the teaching and student interaction that builds value. Before introducing AI into any workflow, understand where the time actually goes; that is where the value lies. Not how the process is supposed to work, but how it works in practice. Where do people slow down? Where do they double-check? Where does expertise matter, and where is it just grinding through mechanics?

The goal is to identify those mechanical tasks and let technology handle them.

AI-powered agentic systems shouldn't totally replace workflows. They intervene in them. We must maintain a 'human-centred approach' where the educator remains the 'subject matter expert guiding students... through the maze of information'. If you don't know where responsibility currently lives, you'll design something that either oversteps or under-delivers, and then goes stale on the software shelf.

Assign the agent a role

Here's where things diverge from typical AI thinking. ChatGPT and its peers are good at broad, general intelligence. When designing an agentic experience, though, design the agent as a role within the system, rather than as an interface or feature.

For course building, that role is curriculum assistant. Not "AI helper." A curriculum assistant has a specific job: suggest structures, draft outlines, identify alignment gaps, source additional resources. It asks questions about which instructional design approach you want to follow (ADDIE, SAM and so on). It prepares, recommends, and flags, but it doesn't decide. That sits with the human, and in this case the educator.

This framing does something important. It gives the agent defined inputs it can rely on, outputs it's expected to produce, and limits where it must defer. The assistant reviews your draft and asks questions. It doesn't submit on your behalf. The agency is still distinctly human where it matters.
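To make the idea concrete, here is a minimal sketch of what a role contract could look like in code. This is an illustration, not GAILE's actual implementation: every name (AgentRole, can_decide, the inputs and outputs listed) is hypothetical, chosen to mirror the framing above of defined inputs, expected outputs, and decisions that always defer to a human.

```python
# A hypothetical sketch of the "agent as a role" framing: the contract
# names what the agent may rely on, what it must produce, and the
# decisions it must always hand back to the educator.
from dataclasses import dataclass


@dataclass
class AgentRole:
    name: str
    inputs: list[str]      # what the agent can rely on
    outputs: list[str]     # what it is expected to produce
    defers_on: list[str]   # decisions that always go to a human

    def can_decide(self, action: str) -> bool:
        """The agent acts only on outputs it owns, never on deferred decisions."""
        return action in self.outputs and action not in self.defers_on


# Illustrative instance: the curriculum assistant described above.
curriculum_assistant = AgentRole(
    name="curriculum assistant",
    inputs=["course draft", "learning outcomes", "institutional templates"],
    outputs=["suggested structure", "draft outline", "alignment-gap flags"],
    defers_on=["assessment decisions", "final publication"],
)

print(curriculum_assistant.can_decide("draft outline"))         # True: it prepares and recommends
print(curriculum_assistant.can_decide("assessment decisions"))  # False: stays with the educator
```

The point of writing the contract down, even informally, is that "where the agent must defer" becomes an explicit, reviewable decision rather than an emergent behaviour.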

Humans remain where they matter

Agents may one day become powerful enough that autonomy by default is a real possibility, but at this stage, and given the complexity of education, that's not just impractical: it's the wrong goal.

Some things stay human. Assessment decisions. Student communication around sensitive matters. Pedagogical judgment about what works in a classroom. These aren't tasks to automate. They're where educator expertise is irreplaceable. 

Designing human-in-the-loop isn't a fallback for when the AI fails. It's a first-class design decision. You explicitly define where humans review, where they decide, and how they correct. That clarity is what makes the system trustworthy.

Trust through behaviour, not claims

Nobody trusts an agent because it says it's intelligent. Trust comes from behaviour.

The difference between a chatbot and a genuine assistant shows up in how it engages. A chatbot asks: "What are your learning objectives?" The curriculum assistant might instead ask: "You've got a group project due in week 8, but students first work in teams in week 7. Is that intentional compression, or would earlier low-stakes collaboration help scaffold toward it?"

The second question shows the agent has understood the structure, noticed a potential friction point, and is asking about intent rather than assuming error.

An agent that feels less like a black box and more like a junior collaborator, whose work you can review, challenge and refine, is a tool that is genuinely useful.

The point

Agentic thinking isn't about making AI more powerful. It's about making it more accountable.

Let's be honest: GAILE is a brand new entity, and we have some behaviour of our own to exhibit before we earn that trust. But we hope that by sharing this framing, and how we're approaching what we build, you can see where we're headed.

We won't be throwing new chatbots at you; we'll be defining the outcome first. We're building on an understanding of the human workflow, and we're obsessed with keeping humans in charge where it matters.

Because when you approach AI agent building with some thought, the question stops being "can the AI do this?" and becomes "is this agent reliably delivering the outcome it's responsible for?"

 

Blog by: Dale Leszczynski | Head of AI – Education

02 March 2026

Acknowledgement of Country

RMIT University acknowledges the people of the Woi wurrung and Boon wurrung language groups of the eastern Kulin Nation on whose unceded lands we conduct the business of the University. RMIT University respectfully acknowledges their Ancestors and Elders, past and present. RMIT also acknowledges the Traditional Custodians and their Ancestors of the lands and waters across Australia where we conduct our business - Artwork 'Sentient' by Hollie Johnson, Gunaikurnai and Monero Ngarigo.