RMIT's most recent AI Community of Practice featured a wide-ranging discussion on ethical use and cognitive offloading, and revealed a clear appetite for putting pedagogy ahead of technology. It's a telling barometer of how far the AI and education conversation has matured, and it reveals a real tension: the technology keeps advancing, but the foundational questions about how we use it haven't kept pace.
These concerns are well-founded. In Australia, nearly 80% of students report using AI in their studies (Chung et al., 2026), and a recent OECD report (2026) warns that the conditions are being set for false mastery, where students believe they understand something they have, in reality, outsourced. However, emerging research offers a more hopeful thread: students with strong self-regulation (Mirriahi et al., 2025) and metacognitive skills (Hong et al., 2025) can use AI in ways that genuinely deepen learning.
The question for educators, then, is not whether to permit AI but how to design learning environments that help students use it well.
Lodge and Loble's (2026) report on AI and cognitive offloading poses two questions that cut to the heart of the problem. Does having a fluent, capable AI partner allow students to bypass the effortful thinking learning requires? And does that offloading free students for higher-order thinking, or prevent the knowledge construction that makes higher-order thinking possible in the first place?
Lodge and Loble draw a crucial distinction between two kinds of cognitive offloading. Beneficial offloading uses AI to reduce unnecessary cognitive burden, freeing students to focus on the thinking that matters. Detrimental offloading uses AI to bypass that thinking altogether. Unstructured use of general-purpose tools like ChatGPT, Claude, or Gemini tends to be detrimental, with sophisticated outputs creating an illusion of competence that students may not even recognise in themselves.
Their answer reframes the challenge entirely. Rather than focusing on preventing AI-assisted cheating, educators need to design learning environments that enable beneficial cognitive offloading, which requires developing students’ metacognition and self-regulated learning capabilities. As they highlight, every task is now effectively a group activity, and, as with any group task, students can either collaborate with their AI partner or let it do all the work.
The core paradox is this: AI is built for speed, while learning is inherently full of friction. Real learning requires students to wrestle with ideas, challenge assumptions, and move back and forth between what they know and what they are encountering. That process should not be shortcut.
Lodge and Loble (2026) sharpen the stakes further: critical thinking depends on domain knowledge. You cannot think critically about something you do not yet understand. Potkalitsky (2025) calls this the knowledge asymmetry problem. Novice learners, lacking the knowledge to evaluate AI outputs, tend to trust them uncritically, stalling the very development they need. Rather than building their own understanding, they continue to rely on AI responses that they are not equipped to interrogate. Meanwhile, students who already possess deep domain knowledge are able to engage with AI critically, using it to extend and refine their thinking. The result is a widening divide, where AI amplifies the capabilities of those who already have them, while quietly eroding the development of knowledge among those who need it most.
Yet this is not an AI problem; fundamentally, it is a learning problem. And it demands a learning-centred response.
Ignoring the AI problem is, at its core, ignoring the learning problem. Attempting to keep students away from AI was never a realistic strategy, and in many cases, that window has already closed. What we risk now is something more serious: leaving students to navigate decisions about their own learning processes with tools they are not equipped to use well, and without the guidance that could make the difference.
General-purpose AI products were not designed with learning in mind. But educators are. Rather than ceding that ground, we can apply our pedagogical wisdom to design AI tools that actively support and foster deep learning: tools built not for speed and convenience, but for thinking.
Tang and Putra (2025) developed a customised AI chatbot for science education built on the principle of treating AI as a dialogic partner rather than an authoritative answer provider. The bot required students to choose a position, provide reasoning, consider opposing viewpoints, and never settle for a simple yes-or-no answer. Student interactions demonstrated four consistent qualities: perspective-taking, reasoning, creative thinking, and evidence-based argumentation. Rather than retrieving answers, students were doing the hard work of reflective thought, and these outcomes held across multiple schools.
Li and colleagues (2025) took a complementary approach, using a progressive prompting intervention in which AI scaffolding gradually fades as students' capabilities develop. Guiding students through structured stages, from basic knowledge to analytical reasoning and then to integrative application, the approach led to notable improvements in both learning achievement and critical thinking, along with a significant decrease in extraneous cognitive load.
Together, these studies point to a clear principle: pedagogically designed AI interactions can shift AI from a shortcut to a genuine learning tool.
A practical guide
Martin and colleagues' (2025) Load Reduction Instruction framework offers a practical foundation for designing course-specific AI learning environments that support beneficial cognitive offloading and move students from novice to independent thinkers.
01 Reduce difficulty first: Match the AI's responses and tasks to where students actually are. Introduce complexity gradually, and only once core concepts are secure.
02 Scaffold, don't solve: Configure AI to guide thinking through questions rather than supply answers. Socratic dialogue and targeted hints keep students doing the cognitive work.
03 Build in structured practice: Design varied, repeated practice opportunities (quizzes, adaptive exercises, iterative dialogue) to help students move knowledge into long-term retention.
04 Make feedback generative: Design AI to go beyond what went wrong, toward what to do differently, prompting students to re-engage rather than simply receive a verdict.
05 Fade the scaffolding over time: Design for independence, not ongoing reliance. As students develop fluency, AI support should step back, shifting toward open-ended problem-solving and practice opportunities.
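As a concrete illustration only (not drawn from any of the studies above), the five principles could be translated into tiered system-prompt configurations for a course-specific tutoring bot, with scaffolding that fades as competence grows. Every name, prompt, and threshold below is hypothetical; this is a minimal sketch of the design pattern, not a prescribed implementation:

```python
# Hypothetical sketch: encoding the five Load Reduction Instruction
# principles as tiered system-prompt configurations. All names, prompt
# wording, and thresholds are illustrative assumptions.

SCAFFOLD_LEVELS = {
    # Principles 01-02: reduce difficulty, scaffold via Socratic questions.
    "novice": (
        "Match explanations to an introductory level. Never give the final "
        "answer; respond with one guiding question or a targeted hint."
    ),
    # Principles 03-04: structured practice with generative feedback.
    "developing": (
        "Offer short practice problems. When the student errs, explain what "
        "to do differently and ask them to retry before revealing anything."
    ),
    # Principle 05: scaffolding fades; shift to open-ended problem-solving.
    "fluent": (
        "Pose open-ended problems and critique the student's reasoning only "
        "when asked. Do not volunteer hints."
    ),
}

def system_prompt(correct_streak: int) -> str:
    """Select a scaffold level from a simple competence proxy: the number
    of consecutive correct student responses (thresholds are arbitrary)."""
    if correct_streak < 3:
        level = "novice"
    elif correct_streak < 8:
        level = "developing"
    else:
        level = "fluent"
    return SCAFFOLD_LEVELS[level]
```

In a real deployment the competence measure would come from richer learning analytics than a correct-answer streak, but the structural point stands: the fading of support is a deliberate design decision, written into the configuration, rather than something left to the student.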
Before you build
Applying these principles well starts before you open any AI platform. Work through these questions first:
The promise of AI in education is real, but it will only be realised through deliberate, pedagogically grounded design. The tools students reach for are optimised for speed and fluency. Our job is to create the conditions where those tools are used in the service of the slow, effortful, deeply human work of learning.
For educators looking to develop their own practice and that of their students, the following skills from the Gen AI Skills Continuum are a useful starting point:
And for those ready to take the next step in designing course-specific AI learning environments:
Chung, J., Henderson, M., Slade, C., Liang, Y., Pepperell, N., Corbin, T., ... & Matthews, K. E. (2026). The use and usefulness of GenAI in higher education: student experience and perspectives. Computers and Education Open, 100347. https://doi.org/10.1016/j.caeo.2026.100347
Hong, H., Vate-U-Lan, P., & Viriyavejakul, C. (2025). Cognitive offload instruction with generative AI: A quasi-experimental study on critical thinking gains in English writing. Forum for Linguistic Studies, 7(7), 325–334. https://doi.org/10.30564/fls.v7i7.10072
Li, C.-J., Hwang, G.-J., Chang, C.-Y., & Su, H.-C. (2025). Generative AI-supported progressive prompting for professional training: Effects on learning achievement, critical thinking, and cognitive load. British Journal of Educational Technology, 56(6), 2550–2572. https://doi.org/10.1111/bjet.13594
Lodge, J. M., & Loble, L. (2026). Artificial intelligence, cognitive offloading and implications for education. University of Technology Sydney. https://doi.org/10.71741/4pyxmbnjaq.31302475
Mirriahi, N., Marrone, R., Barthakur, A., Gabriel, F., Colton, J., Yeung, T. N., ... & Kovanovic, V. (2025). The relationship between students' self-regulated learning skills and technology acceptance of GenAI. Australasian Journal of Educational Technology.
Martin, A. J., Collie, R. J., Kennett, R., Liu, D., Ginns, P., Sudimantara, L. B., Dewi, E. W., & Rüschenpöhler, L. G. (2025). Integrating generative AI and load reduction instruction to individualise and optimise students’ learning. Learning and Individual Differences, 121, Article 102723. https://doi.org/10.1016/j.lindif.2025.102723
OECD. (2026). OECD digital education outlook 2026: Exploring effective uses of generative AI in education. OECD Publishing. https://doi.org/10.1787/062a7394-en
