Part 1: Safeguarding and the age of AI
Artificial Intelligence (AI) is no longer confined to tech labs or Silicon Valley boardrooms. It has crept up on us quickly, and now it’s in our classrooms, our homes, and increasingly, in the lives of young people. Whether we like it or not, AI is shaping the risks children face and the ways we respond to them.
Why safeguarding professionals can’t afford to ignore AI
The speed of change is staggering. Children are using AI tools to help with homework, generate content, and even interact with AI-powered chatbots, some of which are designed to simulate friendship, offer advice, or mimic human empathy. In parallel, some people are using AI to generate deepfakes, automate grooming tactics, and exploit algorithmic loopholes to spread harmful content.
If professionals working in education and the wider child protection sector aren’t engaging with AI, then we are already behind. The danger isn’t just in the tools themselves, but in a lack of understanding about how they’re influencing behaviour, identity, and risk.
We are entering an era where:
AI-generated abuse imagery is almost impossible to trace using traditional methods;
Children may be influenced or harmed by systems that aren't accountable or transparent;
Safeguarding systems themselves might use AI, raising questions about data ethics, fairness, and unintended consequences.
This isn’t tomorrow’s challenge; it’s already here. The risk is that, without professional curiosity and oversight, tools we don’t fully understand will start to shape the environments we are responsible for keeping safe. This doesn’t mean we all need to develop technical knowledge of AI, which is often what puts us off. Just as when we teach young people PSHE, educators don’t need an in-depth understanding of the issue, but an overview, in lay terms, of the topic - its impact and how to manage it.
So where do we begin?
Over this five-part blog series, we’ll explore the double-edged nature of AI in safeguarding: its power to protect, and its potential to harm. We’ll cover how it’s being used (and misused), what ethical and legal frameworks apply, and how professionals can respond with confidence and clarity.
Safeguarding in the age of AI demands not just a working knowledge of the technology, but professional courage. It’s not about becoming tech experts; it’s about ensuring that human values lead technological progress, not the other way around.