Blog by: Daniel Manning (Lead, Generative AI Experience) and Dale Leszczynski (Head of AI – Education)
The question we get asked most when introducing the Generative AI Lab for Education is: so where is this "lab"? Can I touch it? Where are the Bunsen burners and test tubes?
Fair question. But the reality is the lab isn't a room. It's a methodology and pipeline for putting ideas about AI in education to the test, facilitated by people with expertise in pedagogy, software engineering, and design. The Bunsen burners are the AI tools we have access to at RMIT. The test tubes are real classrooms with real students. And the thing we're experimenting for is signal: what are the most satisfying, defensible, and "right fit" AI interventions for teaching and learning?
Our sharp focus for 2026: The Human + AI Curriculum Pipeline.
If you're at RMIT, you've almost certainly been exposed to ideas for how AI can help. Maybe it's automating the tedious parts of designing lecture slides. Maybe it's stress-testing your assessments. Maybe you've built something clever with an off-the-shelf AI model or RMIT's internal AI platform VAL that simplified a genuinely complex process.
Some members of our community have gone well past our headlights. They've built highly impactful workflows on top of tools like Copilot and VAL, driven by a need or problem relevant to them.
Good work is happening everywhere. In pockets, in passion projects, in quiet corners of colleges. The opportunity is to connect it. To give it shared evidence, coordination, and the pedagogical grounding needed to maintain and accelerate the student experience.
GAILE is on a mission to navigate this terrain and provide a map, because there's an institution-wide backseat full of drivers who all have something to say.
GAILE has been set up to be the connective tissue. We want to meet these initiatives wherever they sit, from "I made something cool on the weekend" through to "I think AI can solve a problem here" and usher them into a shared, collaborative methodology. We have the expertise to ground innovations in pedagogy. We are building the infrastructure to connect people who are unknowingly solving the same problems in parallel. And we won't compromise on ethical and responsible AI use.
Here's how a pilot plays out through GAILE's five-stage methodology for AI/Edu experiments:
An idea surfaces, from the community or GAILE. This stage is about intuitive sense-checking and human conversation. We'll coach you on how to safely experiment locally, surface pre-existing tools and capabilities you may not know about, and work toward a shared understanding of the best path forward. There may already be other people piloting something similar. AI can't solve everything, and we probably wouldn't want it to. The output here is a recommendation: is this worth pursuing, and if so, how?
When there's genuine appetite, we co-create a pilot brief. The goals are to de-risk the project early and synthesise a plan. We address the riskiest assumptions up front: Is this even possible with today's technology? Do enough willing participants exist? Can we conceive of a realistic solution within RMIT right now? Is this aligned to RMIT's strategic ambition and your own local context?
If this stage is successful, we have a proposal, and we begin working with learning and teaching leaders across the university to engage the right participants and bring them on board.
Now we get specific. What are we building, and how will we know if it works? We plan the experiment in enough detail to move confidently, without over-engineering it. If there's a tool or interface involved, we design it with the people who'll use it. We also figure out what we're measuring and how we'll ask for consent. The output: designs, a plan, and a clear methodology.
This is where the idea becomes real. We run the pilot, support the people involved, collect data, and iterate. Things will break, things will surprise us, and that's the point. The output: results, and a working version of the thing that's been shaped by actual use.
This is where the lab earns its name. Whatever was learned or produced gets documented with provenance: positive findings, negative findings, things that surprised us. The experiment should be repeatable and should survive the current team (as much as possible in this fast-moving space). The output: defensible insights, a complete package of everything collected and produced, and a recommendation for what comes next.
“AI in education” has a repeatable failure pattern: good local experiments create stories, but they don’t reliably create defensible insight the institution can build on.
When we looked across real-world pilots (and the broader evidence base), the failure modes are rarely a lack of effort or a lack of interesting ideas. They usually stem from unclear intentions, design that isn't methodical or hypothesis-driven, convenience-driven measurement, missing baselines, weak governance, and confounding effects that look like impact.
Our pipeline is intentionally designed as a set of guardrails against the most common validity killers.
Importantly, the last stage (evaluate + archive) is not optional. Institutions fail to compound learning when results stay tacit or trapped in one team. By packaging methods, measures, outcomes, and context (including negative or mixed findings), we build the map the institution can reuse and trust.
The majority of GAILE's pipeline is designed to operate upstream of institutional AI governance, not outside it. Our Lab arm validates ideas at low cost and low risk, using existing tools and real classrooms, so that when an intervention is ready to scale, it arrives at the institutional decision point with evidence, methodology, and a clear recommendation rather than just enthusiasm.
The curriculum pipeline touches almost every aspect of how a course gets designed, delivered, and improved, and this is reflected in the kinds of interventions the community has been most drawn to. On the design side, there's strong interest in tools that help educators get started: generating learning outcome drafts from course intent, building course maps that align weeks, activities, and assessments to those outcomes, and producing assessment brief starters that cover task design, constraints, submission format, and marking approach.
There's also appetite for tools that support course delivery and student experience. This includes explainer builders that break down difficult concepts with examples and anticipated misconceptions, case study generators that create authentic scenarios with embedded prompts and data, and blended learning agents that review course material and develop in-class signposting to support cohesive delivery.
Identifying where AI can help and where humans should stay responsible
Conceptual diagram titled “Human // AI Curriculum Pipeline” illustrating how human expertise and AI support interact across curriculum design. On the left, human perspectives are shown through statements such as knowing disciplinary standards, valuing the learning journey, maintaining integrity, and understanding student needs. In the centre, four goals are presented: starting a draft, improving work, testing quality and integrity, and adapting for context. On the right, corresponding AI-supported prompts are shown, such as generating drafts, aligning to learning outcomes, stress-testing for bias, and creating variations for diverse learners. A visual gradient at the bottom transitions from “Human Agency” to “AI Enabled Support,” indicating a balance between human responsibility and AI assistance.
Feedback and integrity are a third thread. Consistently flagged are feedback starters that are criteria-linked, strengths-focused, and student-actionable, and integrity stress-tests that surface loopholes, authenticity risks, and AI vulnerability in assessment design.
Finally, there's growing interest in making sense of data and making courses more accessible: natural language data explainers that turn course analytics and survey results into plain-language "so what / now what" summaries, and accessibility adapters that adjust reading level, provide alt text, and produce alternative formats and study notes.
Modular tools that assist at any point in the curriculum pipeline
Diagram titled “Human // AI Curriculum Pipeline” showing modular AI tools supporting different stages of the curriculum process. Four stages are represented: Curriculum, Content, Assessment, and Review. Each stage is connected by dotted lines to triangular icons illustrating activities such as building content, interacting with learners, cognitive processing, and feedback. The layout suggests AI tools can assist at any point across the curriculum pipeline.
Capability is at the centre of everything we do. Defensible insights are what lead to foundational, teachable lessons that cut through noise and withstand shifts in popularity.
We're building toward a central gallery of the work RMIT community members are doing, have done, and will do, including all crowdsourced and discovered insights, so that recognition, authorship, and identity are upheld.
If any of this resonates, reach out: gaile@rmit.edu.au
Main header image made using generative AI. The prompt for this image was “A Rube Goldberg–style machine where a variety of inputs enter one end and travel through interconnected stages — being shaped, tested, stress-tested, annotated with feedback — before emerging as polished, documented outputs with attribution tags attached. The machine is part digital (screens, data streams) and part analogue (hands adjusting dials, pencils marking up drafts). Detailed, playful line-art style with selective colour highlights.” Firefly Image 4, upscaled.

RMIT University acknowledges the people of the Woi wurrung and Boon wurrung language groups of the eastern Kulin Nation on whose unceded lands we conduct the business of the University. RMIT University respectfully acknowledges their Ancestors and Elders, past and present. RMIT also acknowledges the Traditional Custodians and their Ancestors of the lands and waters across Australia where we conduct our business - Artwork 'Sentient' by Hollie Johnson, Gunaikurnai and Monero Ngarigo.
Learn more about our commitment to Indigenous cultures