When Google took the stage at BETT 2026, the message to educators was clear: AI should help students think and write more effectively, while relieving some of the administrative load that quietly dominates a teacher’s week. The newly announced Gemini and Google Classroom updates are framed precisely in those terms. 

On paper, the offer is attractive. Gemini will be able to work directly with live Classroom data, summarising student progress, drafting assignments and preparing resources that match the actual learners on your rolls rather than an imagined average student. A redesigned Classroom homepage promises to surface key engagement information for leaders, class‑level insights for teachers, and priorities for students. Native audio and video tools will allow richer instructions and feedback without leaving the platform, while learning‑standards tagging aims to help teachers and systems see at a glance how tasks map to curriculum frameworks, including those used in Australia.

For a busy teacher, it is tempting to see these features as a long‑overdue co‑teacher who never tires of drafting, redrafting and summarising. That framing, however, deserves careful scrutiny. There is a significant difference between support that frees time for deeper curriculum thinking and automation that quietly narrows that thinking to what is easiest for a model to generate.

One practical risk is that planning drifts towards the defaults of the platform. If Gemini can produce a ready‑made assignment sequence, complete with suggested prompts and automated feedback, it will be very easy to accept those structures with minimal modification, especially late on a Sunday evening. Over time, that may lead to an invisible standardisation of assessment tasks and classroom discourse, particularly in schools that rely heavily on the same tools.

A second concern is data literacy. Using real Classroom context means that the model is working with student work samples, engagement patterns and possibly sensitive indicators. Teachers will need clear guidance from systems about what data is shared, how it is processed and what governance arrangements apply. Professional judgement about when not to feed something into Gemini will matter as much as knowing how to phrase a prompt.

At the same time, these tools can open genuinely valuable possibilities. Thoughtful use of Gemini to generate first‑draft scaffolds, alternative explanations or reading‑level adjustments can make it easier to differentiate without extending planning into the late evening. The ability to tag tasks against standards and then view patterns over time may support more strategic discussions in faculties about where gaps and overlaps sit in a programme of learning.
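
To make the second of those possibilities concrete, here is a minimal sketch of the kind of coverage check that standards tagging could enable. It is plain Python over invented data: the standard codes, task titles and the idea of exporting tags in this shape are all assumptions for illustration, not a real Classroom or Gemini API.

```python
from collections import Counter

# Purely illustrative data: each assignment carries the curriculum
# standards it has been tagged against (codes invented for this sketch).
assignments = [
    {"title": "Persuasive essay draft", "standards": ["EN5-1", "EN5-3"]},
    {"title": "Rhetoric analysis",      "standards": ["EN5-3"]},
    {"title": "Podcast review",         "standards": ["EN5-3", "EN5-5"]},
    {"title": "Grammar diagnostic",     "standards": ["EN5-2"]},
]

# The full set of standards the programme intends to cover this term.
programme_standards = ["EN5-1", "EN5-2", "EN5-3", "EN5-4", "EN5-5"]

# Count how many tagged tasks touch each standard.
coverage = Counter(code for task in assignments for code in task["standards"])

for code in programme_standards:
    n = coverage[code]  # Counter returns 0 for standards no task touches
    note = "GAP" if n == 0 else ("possible overlap" if n >= 3 else "")
    print(f"{code}: {n} task(s) {note}".rstrip())
```

Running this flags EN5-4 as untouched and EN5-3 as potentially over‑assessed. Nothing sophisticated, but it is the shape of evidence a faculty meeting can actually argue about.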

The key, as with most AI developments, lies in agency. If teachers treat Gemini as a junior planning assistant whose work must always be reviewed, adapted and occasionally rejected, there is real potential to reclaim time and focus attention on the aspects of teaching that only humans can do. If, instead, the system quietly becomes the default author of tasks and feedback, the profession may find itself working inside someone else’s conception of what “good enough” looks like.

The question for each school, therefore, is not only whether to turn these features on, but also how to frame them in staffrooms and classrooms: as tools to extend human judgement, or as shortcuts that risk replacing it.