Part 4: Mitigating the risks of AI
Exposure to risk is part of life. It’s always important to acknowledge this when discussing risks and how to mitigate them - it would be naive to think that we can prevent young people's exposure to the potential harms of AI. So the key questions are:
How significant is the risk?
How likely is the risk to occur?
What can we do to reduce the significance of the potential impact and the likelihood of it occurring?
Given that we cannot realistically prevent exposure altogether, how can we equip young people to deal with a harmful experience when it occurs?
It’s possible to engage young people in the above process - to pose a potential risk (for example, exposure to algorithms that create echo chambers) and ask them to consider each question as part of a carefully planned session or series of sessions. In fact, taking this approach with many (though not all) young people is incredibly powerful, as it places ownership of the problem-solving process on them.
Roles and responsibilities
We need to teach young people to be autonomous risk assessors, but this requires support and guidance. Who are the role players in supporting and protecting young people from AI and online harms more generally?
Organisations (search services and services that allow users to post content online or to interact with each other, e.g. social media platforms). Until recently, the reality was that such organisations didn’t do nearly enough. There were some safeguards in place, like blocking and reporting, though these functions weren’t (and aren’t) always effective in addressing issues. Things are changing as a result of…
Government and the Online Safety Act 2023. Whilst the Act exists, it has taken time to implement, as there is a huge amount of work to do to satisfy what the Act seeks to achieve. Ofcom is the designated regulator, responsible for implementing the Act’s provisions and enforcing them. In fact, Ofcom has been given significant enforcement powers.
Ofcom (the regulator) - has been consulting with stakeholders and drawing up Codes of Practice for government approval. As of July 2025, age assurance measures must be in place to prevent children accessing online pornography. Companies with websites in scope of the Act that are likely to be accessed by children need to take steps to protect children from harmful content and behaviour - children should be shielded from inappropriate content, e.g. content promoting self-harm. The Act requires providers to specifically consider how algorithms could impact users’ exposure to illegal content - and children’s exposure to content that is harmful to them - as part of their risk assessments. Further aspects of the Act are yet to be fleshed out, and there will be additional requirements for certain categories of service. It’s also worth mentioning that adults will have more control over the content they see - which we should teach about in preparation for later life.
Schools/ other education providers need to risk assess and mitigate against exposure to harms. A big part of this is targeted education through AI literacy, as well as including AI as part of the embedded safeguarding culture. We can teach young people about the harms of AI and how they can reduce their exposure - but also, crucially, what to do if they encounter a harmful experience.
Parents are absolutely key, and schools must work with them. This is not a new phenomenon, but it is ever more important given the pace of change and the frequent lack of understanding. What is your school doing to engage and work with parents regarding AI and online safety? Parents also need to be educated to varying degrees: parents, young people and educators all need to understand the risks of AI and how to combat them. Parents and schools need to work together to protect young people; this includes agreeing consequences where a child exhibits poor online conduct. More on this next week, when we consider an example of an AI safeguarding risk and how to address it.
Part 3: More on AI and potential harms
Last week we began to assess how AI can be used to inflict harm using the 4 C’s model. This week we will continue to look at the 4 C’s and associated risks that AI poses. Remember that you can refer to our AI and Safeguarding Glossary (scroll to bottom of page) for an explanation of the terms in bold.
Contact
Several studies warn of “grooming-style” tactics used in online spaces (e.g., Discord, Reddit offshoots, niche forums) where chatbots or pseudo-bots engage boys in conversations before guiding them toward ideologically extreme content.
Chatbots can be utilised by perpetrators in numerous ways. As well as the above, they can also bully or provide false information, such as inaccurate medical advice. For example, a 12 year old was told by ChatGPT that she might be depressed, which led her to start questioning whether she was. This is dangerous because she may internalise this as fact, identify as a depressed person, and interpret normal sadness as a mental health problem.
As discussed last week, AI can also be used to create images. This can make it easier for predatory adults to appear genuine online when attempting to groom children.
Conduct
Deepfakes are being created by predatory adults and other children, with a variety of motivations - to exploit, groom, humiliate, bully, blackmail or sexually harass. For example, Childline has reported a number of calls where young people describe how a deepfake had been created of them doing or saying something humiliating. In one case, a 14 year old boy reported a deepfake, created by a group of peers, of him saying that he is gay and describing sexual acts he wanted to do to other pupils. Schools do respond to these cases and will do their best to remedy the circumstances, but the long-lasting adverse experience will likely remain with a young person for the rest of their life.
Algorithms create echo chambers where young people may find themselves exposed to a disproportionate quantity of fake news, particularly on social media, which may lead to young people behaving inappropriately. For example, they may be more likely to express discriminatory views that have been reinforced through their social media feed.
Commerce
Recommendation engines that suggest content based on a user’s online activity can lead to a young person being encouraged to make inappropriate purchases, like in-game purchases (‘skins’) or even weapons.
Criminals might use LLMs to help with cyber attacks or to write malware beyond their current capabilities, thereby increasing both the likelihood of an attack and its effectiveness. As LLMs excel at replicating writing styles on demand, there is a risk of criminals using them to write convincing phishing emails, which can easily deceive a young person (in fact, all of us) - for example, an email from a gaming site encouraging a young person to share their personal data in order to win a prize.
Next week… we’ll begin to look at how we can mitigate the risks of AI.
Part 2: How AI can be used to inflict harm
As educators, we need to know and understand the risks of AI. Often this means learning new terms and specific jargon, like ‘deepfake’ and ‘chatbots’, as well as the platforms young people are using - it’s not just TikTok! We have a working document of terms and platforms available here (scroll to bottom of page). Please get in touch if you think we’re missing something!
Once we understand the potential risks, we can then begin to consider how we prevent or mitigate them. There are a number of role players involved, which we’ll explore in an upcoming post, along with strategies to combat the risks.
Risks
In England, Keeping Children Safe in Education (KCSIE) describes the 4 C’s of online safety. We will use the 4 C’s framework to assess the risks of AI.
A quick reminder of the 4 C’s…
Content - viewed online (e.g. pornography, racism, misogyny, self-harm, suicide, misinformation, disinformation (including fake news) and conspiracy theories*)
Contact - interacting with other users online (e.g. an adult posing as a child online)
Conduct - the way people behave online (e.g. another child gamer inflicting verbal abuse, sending pornography)
Commerce - commercial risks such as online gambling, inappropriate advertising, phishing or financial scams
*these have been added to the 2025 iteration of KCSIE.
Content
The capability of AI to produce harmful content is alarming, and such content will only become more life-like and difficult to distinguish from the truth.
AI generated child sexual abuse materials and other harmful AI generated images are now widespread on the internet. Young people are viewing disturbing images, often unintentionally - for example, another person sends or shows them an indecent image. Often these images cause distress and can lead to an unrealistic, dangerous world view.
AI chatbots on social media are being employed to target young people and share divisive fake news. For example, there is growing evidence of young men and boys being targeted by bots that push content promoting misogynistic views on platforms like TikTok and Discord.
We’ll continue to look at the 4 C’s next time…
Part 1: Safeguarding and the age of AI
Artificial Intelligence (AI) is no longer confined to tech labs or Silicon Valley boardrooms. It’s crept up on us quickly and now it’s in our classrooms, our homes and, increasingly, the lives of young people. Whether we like it or not, AI is shaping the risks children face and the ways we respond to them.
Why safeguarding professionals can’t afford to ignore AI
The speed of change is staggering. Children are using AI tools to help with homework, generate content, and even interact with AI-powered chatbots, some of which are designed to simulate friendship, offer advice, or mimic human empathy. In parallel, there are people out there who are using AI to generate deepfakes, automate grooming tactics, and exploit algorithmic loopholes to spread harmful content.
If professionals working in education and child protection more widely aren’t engaging with AI, then we are already behind. The danger isn’t just in the tools themselves, but in a lack of understanding about how they’re influencing behaviour, identity, and risk.
We are entering an era where:
AI-generated abuse imagery is almost impossible to trace using traditional methods;
Children may be influenced or harmed by systems that aren't accountable or transparent;
Safeguarding systems themselves might use AI, raising questions about data ethics, fairness, and unintended consequences.
This isn’t tomorrow’s challenge; it’s already here. The risk is that, without professional curiosity and oversight, the tools we don’t fully understand will start to shape the environments we are responsible for keeping safe. This doesn’t mean we all need to start developing technical knowledge of AI, which is often what puts us off. Just as when we teach young people PSHE, educators don’t need an in-depth, detailed understanding of the issue, but rather an overview, in lay terms, of a topic - its impact and how to manage it.
So where do we begin?
Over this five-part blog series, we’ll explore the double-edged nature of AI in safeguarding: its power to protect, and its potential to harm. We’ll cover how it’s being used (and misused), what ethical and legal frameworks apply, and how professionals can respond with confidence and clarity.
Safeguarding in the age of AI demands not just a working knowledge of it, but professional courage. It’s not about becoming tech experts; it’s about ensuring that human values lead technological progress, not the other way around.
What is a Guardian in the UK?
The term ‘Guardian’ can mean different things. There are different types of guardianship; however, the overarching term ‘Guardian’ is often used, which can lead to confusion!
An Educational Guardian supports a child's welfare and educational journey whilst in the UK, providing a parental role in the absence of a parent close by. The Educational Guardian tends not to live with the student, but they offer pastoral, academic, and logistical support as needed. Educational Guardians are most relevant for international students living away from home, though there may be some cases where a British family appoint an Educational Guardian.
There are many organisations in the UK that provide Educational Guardianship for international students, including Radius EWS.
There is no legal requirement for international students to have a Guardian. The UK Visas and Immigration (UKVI) framework uses the term ‘Nominated Guardian’ in relation to those coming to the UK on a ‘Child Student’ visa, but it is not a UKVI requirement to have a ‘Nominated Guardian’ for students who live at school throughout term-time.
Many schools nevertheless make ‘Guardianship’ a condition of enrolment for all international students, so it is a contractual requirement with the school rather than a legal requirement from UKVI. Schools may refer to a ‘Nominated Guardian’, an ‘Educational Guardian’, or simply a ‘Guardian’. The criteria for an acceptable guardianship arrangement will depend on the school’s policy. The Guardianship could be provided by a professional Educational Guardian, usually as part of a Guardianship organisation, or the Guardian could be someone else (for example, a family friend), provided they meet the school’s criteria.