Part 4: Mitigating the risks of AI
Exposure to risk is part of life. It’s always important to acknowledge this when discussing risks and how to mitigate them - it would be naive to think that we can prevent young people's exposure to the potential harms of AI. So the key questions are:
How significant is the risk?
How likely is the risk to occur?
What can we do to reduce the significance of the potential impact and the likelihood of it occurring?
Given that we should assume prevention is not always possible, how can we equip young people to deal with a harmful experience when it occurs?
It’s possible to engage young people in this process directly - to pose a potential risk (for example, exposure to algorithms that create echo chambers) and ask them to consider each question as part of a carefully planned session or series of sessions. In fact, this approach can be incredibly powerful with many (not all) young people, as it gives them ownership of the thinking and problem solving.
Roles and responsibilities
We need to teach young people to be autonomous risk assessors, but this requires support and guidance. Who plays a role in supporting and protecting young people from AI and online harms more generally?
Organisations (search services and services that allow users to post content online or to interact with each other, e.g. social media platforms). Until recently, the reality was that such organisations did not do nearly enough. There were some safeguards in place, such as blocking and reporting, though these functions weren’t (and aren’t) always effective in addressing issues. Things are changing as a result of…
Government and the Online Safety Act 2023. Although the Act is now law, implementation has taken time because there is a huge amount of work to do to achieve what the Act sets out. Ofcom is the designated regulator, responsible for implementing and enforcing the Act’s provisions, and it has been given significant powers of enforcement.
Ofcom (the regulator) has been consulting with stakeholders and drawing up Codes of Practice for government approval. As of July 2025, age assurance must be in place to prevent children accessing online pornography. Companies whose websites are in scope of the Act and likely to be accessed by children need to take steps to protect children from harmful content and behaviour - children should be shielded from inappropriate content, e.g. content promoting self-harm. The Act requires providers to consider specifically how algorithms could affect users’ exposure to illegal content - and children’s exposure to content that is harmful to them - as part of their risk assessments. Further aspects of the Act are yet to be fleshed out, and there will be additional requirements for certain categories of service. It’s also worth noting that adults will have more control over the content they see - something we should teach about in preparation for later life.
Schools and other education providers need to assess and mitigate the risk of exposure to harms. A big part of this is targeted education through AI literacy, as well as including AI in the embedded safeguarding culture. We can teach young people about the harms of AI and how to reduce their exposure - but also, crucially, what to do if they encounter a harmful experience.
Parents are absolutely key, and schools must work with them. This is not a new phenomenon, but it is ever more important given the pace of change and the frequent lack of understanding. What is your school doing to engage and work with parents regarding AI and online safety? Parents also need to be educated to varying degrees: parents, young people and educators all need to understand the risks of AI and how to combat them. Parents and schools need to work together to protect young people, and this includes agreeing consequences where a child exhibits poor online conduct. More on this next week, when we consider an example of an AI safeguarding risk and how to address it.