Creating AI environments that support learning

AI is now a fixture in higher education. The question is no longer whether students are using it; it is whether we are designing the conditions for it to support learning rather than replace it.

The two-speed gap in AI and education

RMIT's most recent AI Community of Practice session ranged across ethical use and cognitive offloading, and showed a clear appetite for putting pedagogy ahead of technology. It was a telling barometer of how far the AI and education conversation has matured, and it exposed a real tension: the technology keeps advancing, but the foundational questions about how we use it haven't kept pace.

These concerns are well-founded. In Australia, nearly 80% of students report using AI in their studies (Chung et al., 2026), and a recent OECD report (2026) warns that the conditions are being set for false mastery, where students believe they understand something they have, in reality, outsourced. However, emerging research offers a more hopeful thread: students with strong self-regulation (Mirriahi et al., 2025) and metacognitive skills (Hong et al., 2025) can use AI in ways that genuinely deepen learning.

The question for educators, then, is not whether to permit AI but how to design learning environments that help students use it well.

AI in the classroom: collaboration or shortcut?

Lodge and Loble's (2026) report on AI and cognitive offloading poses two questions that cut to the heart of the problem. Does having a fluent, capable AI partner allow students to bypass the effortful thinking learning requires? And does that offloading free students for higher-order thinking, or prevent the knowledge construction that makes higher-order thinking possible in the first place?

Lodge and Loble draw a crucial distinction between two kinds of cognitive offloading. Beneficial offloading uses AI to reduce unnecessary cognitive burden, freeing students to focus on the thinking that matters. Detrimental offloading uses AI to bypass that thinking altogether. Unstructured use of general-purpose tools like ChatGPT, Claude, or Gemini tends to be detrimental, with sophisticated outputs creating an illusion of competence that students may not even recognise in themselves.

Their answer reframes the challenge entirely. Rather than focusing on preventing AI-assisted cheating, educators need to design learning environments that enable beneficial cognitive offloading, which requires developing students’ metacognition and self-regulated learning capabilities. As they highlight, every task is now effectively a group activity, and, as with any group task, students can either collaborate with their AI partner or let it do all the work.

The confidence trap

The core paradox is this: AI is built for speed, while learning is inherently full of friction. Real learning requires students to wrestle with ideas, challenge assumptions, and move back and forth between what they know and what they are encountering. That process should not be shortcut.

Lodge and Loble (2026) sharpen the stakes further: critical thinking depends on domain knowledge. You cannot think critically about something you do not yet understand. Potkalitsky (2025) calls this the knowledge asymmetry problem. Novice learners, lacking the knowledge to evaluate AI outputs, tend to trust them uncritically, stalling the very development they need. Rather than building their own understanding, they continue to rely on AI responses that they are not equipped to interrogate. Meanwhile, students who already possess deep domain knowledge are able to engage with AI critically, using it to extend and refine their thinking. The result is a widening divide, where AI amplifies the capabilities of those who already have them, while quietly eroding the development of knowledge among those who need it most.

Yet this is not an AI problem; fundamentally, it is a learning problem. And it demands a learning-centred response. 

Designing AI that teaches students to think

Ignoring the AI problem is, at its core, ignoring the learning problem. Attempting to keep students away from AI was never a realistic strategy, and in many cases, that window has already closed. What we risk now is something more serious: leaving students to navigate decisions about their own learning processes with tools they are not equipped to use well, and without the guidance that could make the difference. 

General-purpose AI products were not designed with learning in mind. Educators, however, design for learning every day. Rather than ceding that ground, we can apply our pedagogical wisdom to design AI tools that actively support and foster deep learning: tools built not for speed and convenience, but for thinking.

Tang and Putra (2025) developed a customised AI chatbot for science education built on the principle of treating AI as a dialogic partner rather than an authoritative answer provider. The bot required students to choose a position, provide reasoning, consider opposing viewpoints, and never settle for a simple yes-or-no answer. Student interactions demonstrated four consistent qualities: perspective-taking, reasoning, creative thinking, and evidence-based argumentation. Rather than retrieving answers, students were doing the hard work of reflective thought, and these outcomes held across multiple schools.
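
Tang and Putra do not publish their bot's configuration here, but the core design move, constraining the model to act as a dialogic partner rather than an answer engine, can be approximated in a system prompt. Below is a minimal sketch assuming an OpenAI-style chat-completions API; the prompt wording, the model name, and the dialogic_reply helper are illustrative assumptions, not the authors' implementation.

```python
from openai import OpenAI

# Illustrative system prompt encoding the dialogic-partner principle.
# It approximates the design described by Tang and Putra (2025);
# it is not their published configuration.
DIALOGIC_SYSTEM_PROMPT = """You are a dialogue partner for science students, not an answer engine.
Never settle for a simple yes-or-no answer. In every reply:
1. Ask the student to state a position before you evaluate anything.
2. Ask for the reasoning and evidence behind that position.
3. Offer one credible opposing viewpoint for the student to address.
4. End with a question that pushes the student's thinking one step further."""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def dialogic_reply(history: list[dict]) -> str:
    """Send the conversation so far, constrained by the dialogic system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": DIALOGIC_SYSTEM_PROMPT}, *history],
    )
    return response.choices[0].message.content
```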

Li and colleagues (2025) took a complementary approach, using a progressive prompting intervention in which AI scaffolding gradually fades as students' capabilities develop. Guiding students through structured stages, from basic knowledge to analytical reasoning and then to integrative application, the approach led to notable improvements in both learning achievement and critical thinking, along with a significant decrease in extraneous cognitive load.
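
The fading logic at the heart of this approach can be sketched in a few lines: the scaffold injected into the AI's instructions steps down as an estimate of the student's mastery rises. The stage boundaries, the wording, and the scaffold_for helper below are illustrative assumptions, not the study's actual materials.

```python
# A sketch of progressive prompting in the spirit of Li et al. (2025):
# scaffolding is staged and fades as the student demonstrates mastery.

STAGES = [
    # (mastery threshold to leave the stage, scaffold injected into the AI prompt)
    (0.5, "Guide the student through basic knowledge step by step: define "
          "terms, give worked examples, and check understanding often."),
    (0.8, "Offer analytical prompts only: ask the student to compare, "
          "classify, and justify. Give hints, never full solutions."),
    (1.0, "Pose open, integrative problems. Do not scaffold unless the "
          "student is stuck after a genuine attempt."),
]

def scaffold_for(mastery: float) -> str:
    """Pick the scaffold matching the student's current mastery estimate (0.0-1.0)."""
    for threshold, scaffold in STAGES:
        if mastery < threshold:
            return scaffold
    return STAGES[-1][1]  # fully faded: open-ended practice only

# A student at 0.6 mastery gets analytical prompts, not worked examples.
print(scaffold_for(0.6))
```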

Together, these studies point to a clear principle: pedagogically designed AI interactions can shift AI from a shortcut to a genuine learning tool.  


A practical guide

Designing AI learning environments: a framework

Martin and colleagues' (2025) Load Reduction Instruction framework offers a practical foundation for designing course-specific AI learning environments that support beneficial cognitive offloading and move students from novice to independent thinkers.

Five design principles

01 Reduce difficulty first: Match the AI's responses and tasks to where students actually are. Introduce complexity gradually, and only once core concepts are secure.

02 Scaffold, don't solve: Configure AI to guide thinking through questions rather than supply answers. Socratic dialogue and targeted hints keep students doing the cognitive work.

03 Build in structured practice: Design varied, repeated practice opportunities (quizzes, adaptive exercises, iterative dialogue) to help students move knowledge into long-term retention.

04 Make feedback generative: Design AI to go beyond what went wrong and toward what to do differently, prompting students to re-engage rather than simply receive a verdict.

05 Fade the scaffolding over time: Design for independence, not ongoing reliance. As students develop fluency, AI support should step back, shifting toward open-ended problem-solving and practice opportunities. (One way these principles might translate into a tutor configuration is sketched below.)
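
One way these five principles might be encoded is as a single tutor configuration that generates the AI's operating instructions. The sketch below is our own mapping onto code, not Martin and colleagues' implementation; every field name, threshold, and prompt fragment is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class TutorConfig:
    """Hypothetical configuration mapping the five design principles to a system prompt."""
    student_level: str                 # 01: match responses to where students are
    socratic_only: bool = True         # 02: guide through questions, don't solve
    practice_modes: tuple = ("quiz", "adaptive_exercise", "dialogue")  # 03: structured practice
    feedback_style: str = "what to do differently"  # 04: generative feedback
    scaffold_weight: float = 1.0       # 05: decays toward 0.0 as fluency grows

    def system_prompt(self) -> str:
        rules = [f"The student is at {self.student_level} level; match your pitch."]
        if self.socratic_only:
            rules.append("Respond with guiding questions and hints, never full answers.")
        rules.append(f"Vary practice across: {', '.join(self.practice_modes)}.")
        rules.append(f"Frame feedback as {self.feedback_style}, not as a verdict.")
        if self.scaffold_weight < 0.3:
            rules.append("Scaffolding is fading: prefer open-ended problems.")
        return " ".join(rules)

# A novice gets full scaffolding; lowering scaffold_weight fades it over time.
print(TutorConfig(student_level="novice").system_prompt())
```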


Before you build

Questions worth asking first

Applying these principles well starts before you open any AI platform. Work through these questions first:

  • Is there a genuine learning purpose? Can you articulate exactly what cognitive work this AI environment is designed to support? If the answer is vague, the design will be too.
  • Does it scaffold or solve? Will students think harder because of this tool, or less?
  • Does it build toward independence? Is there a point at which the AI steps back, or does the design encourage ongoing reliance?
  • Is this the right problem to solve? Sometimes redesigning an assessment is a better investment than building an AI tool to navigate a flawed one.
  • Does something like this already exist? Before building, check whether a well-designed tool is available or whether a colleague is already working on the same problem.
  • How will you know if it works? Build in a way to evaluate impact from the start, and make sure you are measuring learning, not just performance.
  • How will you guide students to use it? Students need to understand what the tool is for and what it is not for.
  • Does it include metacognitive prompting? If the AI is not asking students to reflect on their own thinking, redesign it. That reflection is not a nice addition; it is the point (see the sketch below).
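
As a concrete illustration of that last question, reflection can be built into every AI turn rather than left to chance. The sketch below appends a metacognitive check-in to each reply; the prompt wording and the append-on-every-turn policy are assumptions, not a published design.

```python
import random

# Hypothetical metacognitive check-ins, appended so reflection is part of
# the dialogue by design rather than an optional extra.
METACOGNITIVE_PROMPTS = [
    "Before we continue: how confident are you in that answer, and why?",
    "Which part of your reasoning would you check first if you had to defend it?",
    "What do you still not understand about this topic?",
]

def with_reflection(ai_reply: str) -> str:
    """Append a metacognitive check-in to an AI turn."""
    return f"{ai_reply}\n\n{random.choice(METACOGNITIVE_PROMPTS)}"
```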

Designing for thinking

The promise of AI in education is real, but it will only be realised through deliberate, pedagogically grounded design. The tools students reach for are optimised for speed and fluency. Our job is to create the conditions where those tools are used in the service of the slow, effortful, deeply human work of learning.

For educators looking to develop their own practice and that of their students, the Gen AI Skills Continuum is a useful starting point. And for those ready to take the next step, the framework and questions above offer a place to begin designing course-specific AI learning environments.

References

Chung, J., Henderson, M., Slade, C., Liang, Y., Pepperell, N., Corbin, T., ... & Matthews, K. E. (2026). The use and usefulness of GenAI in higher education: student experience and perspectives. Computers and Education Open, 100347. https://doi.org/10.1016/j.caeo.2026.100347

Hong, H., Vate-U-Lan, P., & Viriyavejakul, C. (2025). Cognitive offload instruction with generative AI: A quasi-experimental study on critical thinking gains in English writing. Forum for Linguistic Studies, 7(7), 325–334. https://doi.org/10.30564/fls.v7i7.10072

Li, C. -J., Hwang, G. -J., Chang, C. -Y., & Su, H. -C. (2025). Generative AI‐supported progressive prompting for professional training: Effects on learning achievement, critical thinking, and cognitive load. British Journal of Educational Technology, 56(6), 2550–2572. https://doi.org/10.1111/bjet.13594

Lodge, J. M., & Loble, L. (2026). Artificial intelligence, cognitive offloading and implications for education. University of Technology Sydney. https://doi.org/10.71741/4pyxmbnjaq.31302475

Martin, A. J., Collie, R. J., Kennett, R., Liu, D., Ginns, P., Sudimantara, L. B., Dewi, E. W., & Rüschenpöhler, L. G. (2025). Integrating generative AI and load reduction instruction to individualise and optimise students' learning. Learning and Individual Differences, 121, Article 102723. https://doi.org/10.1016/j.lindif.2025.102723

Mirriahi, N., Marrone, R., Barthakur, A., Gabriel, F., Colton, J., Yeung, T. N., ... & Kovanovic, V. (2025). The relationship between students' self-regulated learning skills and technology acceptance of GenAI. Australasian Journal of Educational Technology.

OECD. (2026). OECD digital education outlook 2026: Exploring effective uses of generative AI in education. OECD Publishing. https://doi.org/10.1787/062a7394-en

13 April 2026