At AI in Education, we listened closely to Bridget Phillipson’s speech at the Global AI Safety Summit. Although the summit focused on global AI risks and governance, her remarks carry particular weight for schools, colleges and those responsible for shaping the future of education.
A central message of the speech was that AI is not neutral. It reflects human choices, values and power structures, and therefore demands clear ethical guardrails. For education, this is a crucial point. Decisions about how AI is introduced into classrooms, assessment, safeguarding and administration will shape not only efficiency, but fairness, trust and opportunity for young people.
We strongly welcome the emphasis on safety, accountability and human oversight. These principles sit at the heart of our work through the AiEd Certified Framework. Schools are already under pressure to ‘do something’ about AI, often without the time, expertise or confidence to act strategically. The risk is fragmented, tool-led adoption that exposes schools and learners to ethical, safeguarding and reputational harm.
The AiEd Certified Framework was designed precisely to address this challenge. It provides a structured, evidence-informed pathway for schools and colleges to embed AI safely, ethically and effectively, rather than reactively. Importantly, it keeps humans in the lead: professional judgement, pedagogical intent and institutional values come first, with AI acting as a support rather than a driver.
The Education Secretary’s focus on shared standards and collaboration also resonates strongly with our approach. The framework encourages schools to think system-wide (across leadership, staff, learners, policies, data and community engagement) rather than treating AI as a standalone technology issue. This mirrors the wider message of the summit: that meaningful AI safety cannot be achieved in isolation.
Another important thread in the speech was public trust. In education, trust is fundamental. Parents, learners and communities need confidence that AI is being used transparently, responsibly and in learners’ best interests. Through certification, schools can demonstrate this commitment clearly and credibly, showing how their use of AI aligns with national priorities around safety, inclusion and accountability.
Ultimately, Bridget Phillipson’s speech reinforces what we see every day in schools: AI is already here, but confidence is uneven and guidance is often abstract. The opportunity now is to bridge national ambition with practical, school-led action.
By grounding AI adoption in clear principles, such as those articulated at the Global AI Safety Summit, and embedding them through frameworks such as AiEd Certified, we can move from anxiety to assurance. If we get this right, AI can strengthen trust in education rather than undermine it, and support a future where innovation and ethics go hand in hand.