Part 3: More on AI and potential harms

Last week we began to assess how AI can be used to inflict harm using the 4 C’s model. This week we will continue to look at the 4 C’s and associated risks that AI poses. Remember that you can refer to our AI and Safeguarding Glossary (scroll to bottom of page) for an explanation of the terms in bold.

Contact

Several studies warn of “grooming-style” tactics used in online spaces (e.g., Discord, Reddit offshoots, niche forums) where chatbots or pseudo-bots engage boys in conversations before guiding them toward ideologically extreme content.

Chatbots can be exploited by perpetrators in numerous ways. As well as the above, they can also be used to bully or to provide false information, such as unreliable medical advice. For example, a 12-year-old was told by ChatGPT that she might be depressed, which led her to question whether she was. This is dangerous because she may internalise the suggestion as fact, come to identify as a depressed person, and interpret normal sadness as a mental health problem.

As discussed last week, AI can also be used to create images. This can make it easier for predatory adults to appear genuine online when attempting to groom children. 

Conduct

Deepfakes are being created by predatory adults and by other children, with a variety of motivations: to exploit, groom, humiliate, bully, blackmail or sexually harass. For example, Childline has reported a number of calls in which young people describe a deepfake that had been created of them doing or saying something humiliating. In one case, a 14-year-old boy reported a deepfake created by a group of peers in which he appeared to say that he was gay and to describe sexual acts he wanted to perform on other pupils. Schools do respond to these cases and will do their best to remedy the circumstances, but the long-lasting adverse experience will likely remain with the young person for the rest of their life.

Algorithms create echo chambers in which young people may find themselves exposed to a disproportionate quantity of fake news, particularly on social media, which may lead to them behaving inappropriately. For example, they may be more likely to express discriminatory views that have been reinforced through their social media feeds.

Commerce

Recommendation engines that suggest content based on your online activity can lead to a young person being encouraged to make inappropriate purchases, such as in-game items (‘skins’) or even weapons.

Criminals might use LLMs to help with cyber attacks or to write malware beyond their current capabilities, increasing both the likelihood of an attack and its effectiveness. Because LLMs excel at replicating writing styles on demand, there is also a risk of criminals using them to write convincing phishing emails, which can easily deceive a young person (in fact, any of us) - for example, an email purporting to come from a gaming site that encourages a young person to share their personal data in order to win a prize.

Next week… we’ll begin to look at how we can mitigate the risks of AI.
