Five Areas to Consider About the Use of AI in School

AI is already part of school life. You might not have a strategy yet, or even a shared view across your staff team, but someone has probably tried it to plan a lesson. Someone else has tested it on an administrative task (and maybe that person is you!). For many schools, it has crept in quietly, and that's often how these things start.

As a school leader, you may need to take a moment to think. What’s actually happening in your setting? Where are the risks? What might be worth exploring before it all beds in unnoticed?

This post offers five thought-provoking prompts drawn from Honeyguide’s AI Use and Ethics Toolkit Pack – a practical, editable pack to help school leaders explore risks, responsibilities and leadership decisions across school life.

1. Pupil use of AI: Are we ignoring something that’s already happening?

Most AI discussions in schools focus on staff, but pupils are using it too, especially at home. They might be typing prompts into tools like ChatGPT to summarise a homework text or rewrite an answer in more ‘teacher-ish’ language – yes, even in primary.

But does this mean pupil use of AI is damaging? What about when they use it as a thinking partner to better understand tricky concepts or as a tool to promote independence?

Here are some thoughts to consider:

  • Is there a risk that our pupils will become over-reliant or use AI to shortcut thinking?
  • Are we aware that AI tools can generate biased, inaccurate or age-inappropriate content, which, in turn, can affect pupils’ understanding?
  • Could safe, structured AI use be built into lessons or homework to help our pupils use it effectively?
  • How could we build pupils’ digital discernment and critical thinking, regardless of whether they currently use AI?

2. AI and SEND: Could it support personalisation or flatten the nuance?

Some leaders are exploring whether AI could help generate first drafts of social stories, emotional scripts or visual schedules. In settings where pupils need clear, repeated routines — especially early years or specialist provision — this could save time and provide a useful starting point.

But SEND support is rarely that straightforward. AI can’t know what tone feels right for an individual child, and there could be data protection implications if a staff member enters personal data about a specific pupil. Images created by AI may be too vague or abstract, and while generic scripts might look fine at first glance, they often miss the detail that makes a difference.

Here are some questions to help you reflect:

  • Could AI help reduce the time teachers or SENDCos spend creating basic scripts or visual routines?
  • How do we ensure what’s produced still feels personal, safe and appropriate for each pupil?
  • Might this help us respond more quickly when new needs emerge?
  • What role do visuals and symbols play in our setting and can AI support this effectively?

3. AI and Parent Communication: Should we be upfront about its use?

AI tools are already being used in some schools to help draft letters and emails to parents. Used carefully, this can support consistency and reduce admin time, especially when staff need to communicate clearly and professionally under pressure.

But what happens when parents start to notice? If a message feels too generic or the tone slips, families may question how it was written. Some will be fine with AI involvement, but others may see it as impersonal or even inappropriate, particularly in sensitive situations.

Useful questions to consider:

  • Should we tell parents if AI has supported part of a communication and in what circumstances?
  • Would a clear internal stance help staff make good decisions here?
  • How might different parent groups respond to this transparency?
  • What safeguards do we have to make sure the school’s voice and values still come through?

4. Staff oversight: Do we risk being too hands-off or too heavy-handed?

Most staff who are experimenting with AI are doing so with good intentions: to save time, test ideas or ease workload. But without a shared conversation, small individual choices can quickly become whole-school habits, with the potential for long-lasting damage.

A good example of this is the use of AI to draft report comments:

  • One teacher uses AI to help draft report comments to get started.
  • Another writes every line manually, working late into the evenings and weekends.
  • Another pastes in well-known generic banks of report statements and asks AI to generate a comment for each pupil based on the simple effort and attainment grades they give.
  • Another feeds AI with full pupil progress data and prompts it to create report comments for every pupil with positives, development points and targets.

Over time, this kind of inconsistency creates problems, not just in tone or quality, but in fairness. Parents read these reports about their children and will expect them to feel personal, considered and accurate, even though staff are under huge pressure to get them written at all. If some staff are quietly relying on AI and others aren't, it raises questions about workload, expectations and trust, as well as possible data protection implications.

Some prompts to explore with your team:

  • Do we know how and where staff are currently using AI and do they feel comfortable sharing it?
  • Would some contexts benefit from clearer expectations or guidance (e.g. using AI to write pupil reports, using AI to plan lessons)?
  • What support do staff need to use AI professionally without feeling monitored?
  • How do we tap into the strengths of staff who are already exploring the use of AI?

5. Leadership and Governance: Who’s actually responsible?

AI use in schools often begins quietly but when things go wrong, questions about oversight come quickly. Who approved that tool? Who checked that message? Who owns the decision?

Without a clear line of accountability, schools risk leaving staff exposed and governors uninformed. If concerns are raised, whether by a parent, a member of staff or an inspector, ambiguity doesn't hold up well under scrutiny.

Questions to reflect on as a leadership team:

  • Who currently oversees decisions about AI use across school operations?
  • How are risks reviewed, shared and acted on when needed?
  • Do governors understand the school’s current position and is it written down?
  • Are we prepared to explain how decisions around AI are made and monitored?

What can school leaders do to start a conversation about AI?

You don’t need a full AI strategy to get started, but you do need time and space to think.

If these five prompts have sparked reflection, the AI Use and Ethics in Schools Toolkit Pack can help you go further. It includes seven editable checklists covering teaching, SEND, assessment, admin, leadership, ethics and more, with practical use cases, risks and questions to support confident decision-making.

 
