What I have learnt this week – ChatGPT Projects, NotebookLM and the Ethics of AI

This week’s focus has been on two powerful AI applications for education, ChatGPT Projects and NotebookLM, and the essential ethical blueprint that must guide their deployment, especially in the sensitive areas of Special Educational Needs and Disabilities (SEND) and Social, Emotional, and Mental Health (SEMH).


1. The Game-Changers: Analysing SEMH Data with AI

I spent the start of the week exploring how two different tools can tackle the SENCO’s biggest administrative burden: making sense of vast, disparate qualitative data (incident reports, timelines, specialist advice, etc.). I quickly realised that these tools aren’t just about efficiency; they’re about enhancing the quality and consistency of our support.

Lesson 1: ChatGPT Projects for a Private, Central Brain

My first insight, which I detailed in my post on Revolutionise Your SENCO Workflow: Analysing SEMH Data with ChatGPT Projects, was understanding the secure deployment model. Using ChatGPT Projects with the “Project Only” setting is the key. This creates a secure, dedicated analysis environment, a ‘private LLM’ of sorts, that interrogates only the specific documents I upload.

This means I can:

  • Centralise Pupil Data: Upload a pupil’s entire history (reports, timelines, our school’s Trauma Perspective Practice (TPP) manuals) into one place.
  • Decentralise Support: Use the share function to instantly provide new teaching assistants (TAs) with tailored, evidence-based briefings, and give class teachers ready-made strategies based on that pupil’s history and our school’s framework.
  • Prioritise Strategically: Ask the AI to analyse recurring themes across the data to help the SLT prioritise staff training, ensuring our CPD budget targets the precise needs of our current cohort (a sketch of this kind of theme analysis follows below).
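
For colleagues who like to peek under the bonnet, here is a minimal sketch of that theme-analysis step done programmatically. It is not the Projects UI itself: it uses the OpenAI Python SDK, the model name and the incident text are purely illustrative, and real pupil data should never be sent to an external API without checking your school’s data-protection agreements first.

```python
# A minimal sketch of the "upload evidence, ask for themes" pattern via the
# OpenAI Python SDK. Model choice, file contents and wording are illustrative
# only; anonymise and check data-protection agreements before using real data.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical excerpt from an anonymised incident log.
incident_log = """
Mon: Pupil A left the classroom after an unplanned seating change.
Wed: Raised voice during group work; calmed after quiet-space time.
Fri: Refused to start a writing task following a fire-alarm test.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a SEND analysis assistant. Identify recurring "
                "themes and likely triggers in the incident log provided, "
                "and suggest priority areas for staff training."
            ),
        },
        {"role": "user", "content": incident_log},
    ],
)

print(response.choices[0].message.content)
```

The point is not the code but the pattern: one focused system instruction, one body of evidence, one targeted question.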

Lesson 2: NotebookLM for Referenced, Real-Time Advice

I then explored NotebookLM, which impressed me with its simplicity and commitment to source verification. As a free tool, it’s a brilliant option for schools already using Google Workspace. I documented this process in my piece, How NotebookLM Can Revolutionise Qualitative SEMH Data Analysis in UK Schools.

The real power here lies in its ability to act as a research assistant that doesn’t just summarise but also references its findings. I learned that by uploading documents like behaviour logs alongside our TPP manual, the tool can:

  • Identify a pupil’s top three triggers instantly.
  • Provide immediate support strategies using the consistent, trained language of the TPP framework (e.g., “window of tolerance”).
  • Crucially, every piece of advice is linked back to the source document. This builds trust and accountability among staff, as they can verify the information directly (the sketch below shows this grounding pattern in miniature).
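
NotebookLM does this grounding automatically, and its internals are not public, but the underlying pattern is worth seeing in miniature. Here is a hedged sketch, with entirely hypothetical snippets standing in for real documents, of how labelled sources plus a strict citation instruction produce answers staff can trace:

```python
# A conceptual sketch of source-grounded prompting, the pattern NotebookLM
# automates: label each document, then instruct the model to cite those
# labels so every claim can be traced back. All snippets are hypothetical.

sources = {
    "S1": ("behaviour_log.txt",
           "Most incidents follow unstructured transitions, e.g. after PE."),
    "S2": ("tpp_manual.txt",
           "When a pupil leaves their window of tolerance, offer "
           "co-regulation before any demand is reintroduced."),
}

def build_grounded_prompt(question: str) -> str:
    """Interleave labelled sources with a strict citation instruction."""
    blocks = "\n\n".join(
        f"[{label}] ({name})\n{text}"
        for label, (name, text) in sources.items()
    )
    return (
        "Answer using ONLY the sources below. Cite the label after every "
        "claim, e.g. [S1]. If the sources are silent, say so.\n\n"
        f"SOURCES:\n{blocks}\n\nQUESTION: {question}"
    )

print(build_grounded_prompt("What are this pupil's top three triggers?"))
```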

2. My Guide to the Non-Negotiable: AI Ethics

After seeing the power of these tools, I realised the second, and arguably most important, part of the week’s learning was synthesising the principles of AI ethics. Deploying this technology with vulnerable children means our moral and legal responsibilities are amplified. AI in education must be ethical education.

My key takeaways on the essential ethical framework, which I covered in my Guide to AI Ethics: From Beginner to Expert, are:

The Five Pillars of Responsible AI

  1. Non-Maleficence (“Do No Harm”): This is the foundation. We must proactively risk-assess and mitigate any harm—physical, psychological, or societal—that our systems could cause. For us, this means preventing the AI from generating manipulative or counter-productive advice.
  2. Accountability: We can’t let the technology be a scapegoat. I need clear policies defining who is responsible when an algorithmic error leads to harm—the developer, the operator (me, the SENCO, or the school), or the end user.
  3. Transparency and Explainability: Trust relies on knowing how and why AI is used. Transparency means clearly labelling when AI is involved. Explainability means ensuring that staff and parents can understand the AI’s reasoning, making it possible to challenge incorrect advice.
  4. Fairness and Non-Discrimination: I learned that algorithmic bias is a serious risk. If the historical data fed into the AI reflects societal biases (e.g., around gender or socio-economic status), the AI will reproduce and amplify them. We must design for equity and rigorously audit our data to prevent this (see the sketch after this list).
  5. Human Rights: Ultimately, AI is not above the law. Our usage must fully respect fundamental rights, particularly Privacy (given the sensitivity of SEMH data) and Non-Discrimination.
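
To make “audit our data” concrete, here is a minimal sketch on entirely hypothetical counts. It borrows the “four-fifths” rule of thumb from employment-selection auditing: if the lowest group’s flag rate falls below 80% of the highest, the disparity warrants human review.

```python
# A minimal fairness audit on hypothetical data: compare how often pupils
# from different groups are flagged by a system, then apply the "four-fifths"
# rule of thumb: if the lowest group rate is under 80% of the highest, the
# disparity warrants human review.

from collections import Counter

# Hypothetical (group, was_flagged) records; in practice these would come
# from your own anonymised logs.
records = [
    ("boys", True), ("boys", True), ("boys", True), ("boys", False),
    ("girls", True), ("girls", False), ("girls", False), ("girls", False),
]

flagged = Counter(group for group, was_flagged in records if was_flagged)
totals = Counter(group for group, _ in records)
rates = {group: flagged[group] / totals[group] for group in totals}

for group, rate in rates.items():
    print(f"{group}: flagged {rate:.0%} of the time")

if min(rates.values()) < 0.8 * max(rates.values()):
    print("Disparity exceeds the four-fifths threshold: review the data.")
```

A check like this proves nothing on its own, but it turns “we must audit” from a slogan into a repeatable habit.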

My Reflection: Steering the Ship 🧭

The move towards AI in education is not a question of if, but how. I now feel confident that we can leverage the analytical power of tools like ChatGPT Projects and NotebookLM to deliver rapid, consistent, and evidence-based support for every pupil.

However, the speed of innovation must be matched by the robustness of our ethical commitment. By prioritising accountability, transparency, and fairness, we ensure that AI remains a force for good, elevating our professional capacity without compromising the dignity or security of our pupils.
