Part 5: The risks of AI - how to deal with a concern

In this final instalment, we will consider the practicalities of safeguarding against the risks of AI - what professionals actually need to do. 

At the school level, PSHE/RSE leads should periodically review and update the curriculum to embed emerging issues and risks. Senior leaders ought to consider how the school can engage parents to keep them up to date and informed. These are universal preventative principles. But what about when a young person experiences abuse involving AI? Let’s look at a specific example, which was touched upon in Part 3 - deepfakes.

Scenario: a deepfake video of a young person masturbating is created by another pupil in the same year group and shared in a number of WhatsApp and Discord group chats. 

  • How significant is the risk? 

Severe - a young person is likely to find this deeply humiliating and upsetting, impacting their mental health and potentially their behaviour and academic performance. 

  • How likely is the risk to occur?

To an extent this is a contextual question and therefore depends on the school, local area, etc. But in the main, the likelihood is high and increasing because of how easy it is to create such material and the fact that creating deepfakes is, arguably, becoming normalised. 

  • What can we do to reduce the significance of the potential impact and the likelihood of it occurring? 

Preventative education - teaching young people about deepfakes, the law around them, the ethics and impact when used to inflict harm, the consequences as a perpetrator, how to handle the situation as a victim (reporting and support).

  • Given that prevention can never be guaranteed, how can we equip young people to deal with a harmful experience when it occurs? 

Firstly, a strong safeguarding culture should enable a young person to come forward and disclose - or for a friend to do so on the victim’s behalf. Young people won’t always disclose and may not have an ally who will do so for them - they may even ask their friends specifically not to share the issue with staff. This is why we must always be alert to changes in behaviour and other warning signs that may indicate something isn’t right. We can then approach the young person in an appropriate way to check in with them, and/or report to the DSL. 

With the deepfake incident described above, the DSL needs to know straight away. The trusted adult to whom the young person has disclosed should:

  • listen to what the young person has to say;
  • ask questions for clarity (who, what, where, when - all the facts);
  • remain non-judgemental and objective;
  • ensure they do not view the deepfake themselves;
  • reassure the young person that they have done the right thing, and that experiencing something like this is bound to be upsetting;
  • explain that they will need to pass this on to the person in charge of safeguarding.

In this scenario, the trusted adult ought to escort the young person directly to the DSL or a DDSL and pass on the factual account of the disclosure, then return to their work without discussing the matter with anyone else. 

Then what?

The focus must be on the wellbeing of the child. Ensuring the victim feels supported is essential; this will involve parental engagement as soon as is practically possible. The young person may want to leave school for the rest of that day. The victim and their parents must be assured that this behaviour is unacceptable and that there will be consequences for the perpetrator. 

Technically, the perpetrator has broken the law (Sexual Offences Act 2003). The matter will need to be referred to the police via the local authority (MASH) if there is evidence of abusive and/or aggravating factors. To determine this, the perpetrator must be interviewed by the DSL and, ideally, another member of staff who is level 3 safeguarding trained. The perpetrator should be questioned carefully (you’ll get more out of them if you go in neutral rather than too stern, though you may need to be firm). The DSL is within their rights to, and should, ask the perpetrator to delete the video - using “delete for everyone” - from all the WhatsApp and Discord group chats in which the deepfake was shared, and to show on-screen evidence that it has been deleted. The perpetrator should also be instructed to delete the video from their camera roll. 

There needs to be a real effort made to coach the perpetrator to assess their own behaviour and understand that they have done something wrong (morally - not just against the school rules). The objective is for them to accept responsibility through a restorative conversation, and even identify how they can make it right. The perpetrator’s parents must then be engaged and brought up to speed. If the perpetrator has not been co-operative, parental assistance may help move things in the right direction. The DSL can lean on the law if necessary - the perpetrator needs to know they have broken the law, and whether the police become involved depends on the perpetrator’s response. Consequences will be in line with the school behaviour policy - for a matter like sharing a deepfake video of this kind, they ought to be significant and known amongst the school community. 

Even if the perpetrator takes full responsibility and cooperates, it’s not over. The victim needs to be informed of the outcome, and it may be appropriate to engineer a restorative conversation between the victim and perpetrator. The victim will need periodic pastoral support, and relevant staff should be hyper-vigilant to further potential issues concerning both the victim and perpetrator. There may be vulnerabilities around the victim that require a personal risk assessment. And remember, always maintain a chronological log to record everything - conversations, actions, next steps. 

It’s not easy

The example given is only one potential safeguarding risk associated with AI. There are many. This is why staff training is so essential - so that there is an awareness of the issues and how to address them. There is no quick fix, and when issues arise the work in resolving them is huge. The right balance of education, clear consequences for inappropriate conduct and a culture of listening and openness is crucial - as well as ensuring that schools work with parents to protect their children.

Next

Part 4: Mitigating the risks of AI