For most educators, lesson planning and designing learning activities is where AI feels most immediately useful — and for good reason. The blank page problem is real. But the challenge isn't just getting something down on paper. Designing an effective learning experience involves holding a lot of considerations at once: Does this task work towards the intended learning outcomes? Does it scaffold skill development progressively? Am I giving students the opportunity to apply concepts to real-world problems, or to test assumptions in a safe space? Have I catered for a diversity of abilities, learning styles, and perspectives? Is this genuinely inclusive?
These are not small questions. They are the hallmark of thoughtful, professional teaching practice, and they are exactly where AI can help.
Start by giving AI your topic, your learning outcomes, and your student context — who they are, what they already know, what they're working towards. It will generate a structured starting point: a draft plan, a sequence of activities, a set of ideas organised around your objectives. That first draft gives you something concrete to react to, rather than starting from nothing.
But the real value of working with AI goes beyond the initial draft. Once you have something on the page, you can use AI as a thinking partner. Ask it to review your plan against your learning outcomes. Prompt it to suggest ways to make the activity more inclusive, or to offer alternative tasks for different modes of learning. Ask whether the activity provides adequate scaffolding, or how it might be adapted for students who are either struggling or ready to go deeper. AI won't know your students the way you do, but it can surface considerations you might have missed, offer fresh perspectives, and help you stress-test your design before you take it into the room.
There is, however, an important caveat. Research by Chen and colleagues found that when you prompt a base model with something as simple as "generate a lesson plan on X," you tend to get back teacher-centred activities: plenty of "the educator presents" and "students listen and discuss," with little room for student agency or meaningful dialogue. This isn't a flaw so much as a reflection of what the model has learned from: its training data skews toward traditional, teacher-led instruction, and without clear direction, that is the pattern it defaults to (Chen et al., 2025).
The fix is in your prompt. To get back something that reflects your actual pedagogical values, you need to be explicit about them. If you want activities that centre student inquiry, say so. If you want students working at higher levels of thinking — analysing, evaluating, creating — use the verbs that signal that intent: compare, critique, justify, break down, synthesise. A prompt like "design an activity where students critique two competing approaches and justify a recommendation" will produce something very different from "generate a lesson plan on X." Frameworks like Bloom's Taxonomy or the SOLO Taxonomy are not just useful for designing learning; they are powerful prompting tools that help you communicate the kind of thinking you want AI to design for.
This is where your professional expertise becomes more important, not less. AI does not know your learners. It does not understand the ways of knowing that are particular to your discipline, or the nuances of what it means to think like a nurse, an engineer, a historian, or a designer. You do. The quality of what AI produces is directly proportional to the quality of the thinking you bring to the prompt. Used well, these tools do not replace pedagogical expertise; they demand it.
The key distinction throughout is this: you are not outsourcing the design. You are using AI to accelerate the first draft, sharpen your thinking, and ensure that the plan you walk away with is more considered, more inclusive, and more fit for purpose than if you had worked alone.