There have been so many articles in the press about the impact that cyberbullying and grooming for sexual exploitation can have on children that no one can be left in any doubt that this happens. What many people don’t realise is that this abuse of children takes place on an industrial, international scale.
The conclusion of many researchers in online child safety (Professor Sonia Livingstone, for example) is that the best form of online child protection is for parents or carers to have open discussions with children about the risks they face online, and to advise on what steps children should take if things get out of control. The problem is that, generally speaking, while parents are “life savvy”, they are “social media naïve”, whereas children are “social media savvy” and, almost by definition, “life naïve”.
So how can this gap be bridged to enable the informed discussions that leading researchers believe are the appropriate way forward to ensure online safety for children? Where in the whole of the “internet ecosystem” is it possible to provide effective advice and guidance to children? When should that be done, and what should the advice and guidance cover? And who should supply it anyway?
Ernie Allen, Chair of the WeProtect Global Alliance, believes that, because this problem is caused by technology, technology can also help to solve it. Could a technical solution be implemented in the home router? Well, it could, but what happens when the child leaves home with their smartphone, or bypasses the home Wi-Fi network and uses the 4G cellular network instead? Could it be implemented by a social media platform such as TikTok? Well, it could, but we know that children use diverse social media applications, such as Telegram, Yolo, Instagram and others, all at the same time. Could a solution be implemented by Apple? Well, yes, but many families have mixed devices, so what about Android?
The keyboard is key
It seems to us that the only place in the entire internet ecosystem that could provide a consistent approach to online safeguarding is, in fact, the smartphone’s keyboard. It is device-independent (it works on iPhones, Android phones and tablets), platform-agnostic (it doesn’t matter which social media networks children are using, or even whether those networks are encrypted) and multi-language (it turns out children live all over the world).
Our software acts as an intelligent safeguarding keyboard that runs on a child’s device. It guides and educates them in real time, helping them to become safer digital citizens. It uses AI to “contextualise” what a child is typing but, crucially, it does so whilst always respecting their fundamental right to privacy. This is all done on the device; the messages that a child types never leave the child’s phone for analysis on external servers, and parents never get to know what their child is saying or who they are talking to. This is vital to all we do, so much so that we voluntarily went through detailed and rigorous testing with the UK’s Information Commissioner’s Office and came out with a clean bill of health.
As the software is fully automated, it may sometimes get something wrong. It is designed to filter the most harmful messages and prevent them from being sent before the child hurts themselves or others. The software is designed to over-filter: if in doubt, the system would rather safeguard than leave the child potentially at risk.
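The “over-filtering” design choice can be illustrated with a toy sketch. Everything here is our own illustration: the word list, scoring and threshold are stand-ins, not SafeToNet’s actual model, which is not public.

```python
# Illustrative sketch only: on-device message screening with a deliberately
# conservative ("over-filtering") threshold. The word list, scoring function
# and threshold are hypothetical stand-ins for a trained AI model.

RISK_TERMS = {"hurt", "kill", "nude"}  # toy stand-in for a real model

def risk_score(message: str) -> float:
    """Toy scorer: the fraction of tokens that look risky."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    flagged = sum(1 for t in tokens if t in RISK_TERMS)
    return flagged / len(tokens)

# A deliberately low threshold blocks borderline messages: the system would
# rather safeguard (a false positive) than let a harmful message through.
BLOCK_THRESHOLD = 0.1

def should_block(message: str) -> bool:
    return risk_score(message) >= BLOCK_THRESHOLD
```

The safeguarding bias lives entirely in the threshold: lowering it trades more wrongly blocked messages for fewer missed harmful ones, which is the trade-off the paragraph above describes.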
Because of this, we give the child an opportunity to tell us when the software is wrong: they can press a button on their keyboard, which first removes all of their personal details and then sends the flagged words to SafeToNet’s cyber-psychology team. The team has no idea who sent the message. This is a key element of GDPR (the General Data Protection Regulation) compliance, as well as being core to all we stand for: protecting the privacy rights of the child.
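The shape of such an anonymised report can be sketched as follows. This is a minimal illustration under our own assumptions (the field names and redaction patterns are invented); the real system’s redaction is certainly more thorough.

```python
import re

# Illustrative sketch only: redact obvious personal details before a flagged
# message is sent for human review. Patterns and field names are hypothetical.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{7,}\b")

def build_anonymous_report(flagged_text: str) -> dict:
    """Return only the flagged words, with obvious identifiers removed.
    Deliberately contains no user ID, device ID or contact details."""
    redacted = EMAIL.sub("[email]", flagged_text)
    redacted = PHONE.sub("[number]", redacted)
    return {"flagged_text": redacted}

report = build_anonymous_report("call me on 07700900123 or jo@example.com")
```

The key design point is what the report does *not* contain: no sender identity travels with the flagged words, so the reviewing team cannot link them back to a child.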
A cheeky Nando’s
Recently, for example, our cyber-psychology team started seeing words related to food, and their first instinct was that the software was getting it wrong. However, things were not adding up. The software doesn’t just analyse words; it also detects anomalies in a child’s normal behaviour patterns, and as such can detect risk when a child goes quiet, becomes more active, or shows signs of aggression. This is how the software contextualises what is going on: for example, it can detect if a child is showing signs of fear, stress or anxiety.
A good way of explaining this is to consider how all of us, children and adults alike, argue. When we argue, we tend to use shortened words, sent rapidly. We tend to talk over each other: “no you didn’t”, “yes you did”, “no you didn’t” and so on. We don’t use long-winded and lengthy statements, and we rarely allow the other person to talk. This is a classic behaviour pattern that our software can detect.
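The “short messages, sent rapidly” pattern is simple enough to sketch. The window size, message count and word limit below are our own illustrative assumptions, not SafeToNet’s actual parameters.

```python
# Illustrative sketch only: spotting an "argument-like" burst of short,
# rapid messages. All thresholds here are hypothetical.

def looks_like_argument(messages: list,
                        window_s: float = 30.0,
                        min_count: int = 4,
                        max_words: int = 4) -> bool:
    """messages: (timestamp_seconds, text) pairs, oldest first.
    True if at least `min_count` short messages (<= `max_words` words)
    fall within any `window_s`-second span."""
    short = [(t, m) for t, m in messages if len(m.split()) <= max_words]
    for i, (start, _) in enumerate(short):
        in_window = [1 for t, _ in short[i:] if t - start <= window_s]
        if len(in_window) >= min_count:
            return True
    return False
```

A heated exchange (`[(0, "no you didn’t"), (2, "yes you did"), (4, "no you didn’t"), (6, "did")]`) trips the detector, while two long, unhurried messages two minutes apart do not; it is the combination of brevity and tempo, not any individual word, that signals the pattern.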
Sexual discussions have similar, though not identical, patterns. Again, they tend to be short, words are few, and the dialogue tends to finish fairly quickly rather than lasting for hours.
Back to the food example: our software was picking up words relating to Nando’s menu alongside patterns of behaviour that suggested a sexual connotation. Other terms the software detected included “Wing Roulette”, which means a girl being passed around a group, and “perinaise”, which describes a girl considered sexy. There are other words too, but they are more explicit and probably not suitable for this blog entry.
As well as helping to safeguard children by filtering out content that can expose them to online harms such as cyberbullying or sexting, we use the keyboard to provide real-time advice and guidance about safer approaches to being online. This advice and guidance comes from our safeguarding lead, Sarah Castro MBE, a recognised subject-matter expert who, along with our Youth Advisory Board, informs many of our product development ideas.
Why real-time advice and guidance? Because it’s in the heat of the moment that heads need to cool, before the “red mist” descends and things escalate out of control.
Despite research showing that social media usage can contribute to depressive symptoms in teenage users, and despite all the other online harms that children are subject to, social media is here to stay and children will continue to use it. We have also built into our AI-powered keyboard a range of mental wellbeing features that help children better manage their emotions, so that they can recognise the impact of others’ online behaviour towards them and are better equipped to deal with it. All of this, the self-diagnosis tools, recovery exercises and emotion diary, is totally private to the child.
Parents are included
We also provide parents with some software. Why, and what does it do? It comes back to encouraging parents or carers to talk to their children. How does a parent even begin a conversation on discovering that their 11-year-old daughter has been sending intimate images to persons known or unknown? This happens. We provide information to the parent so that they have some knowledge of social media and app-based culture, to help them at least start these much-needed and important conversations.
Do we show parents what their children are doing online? No, quite the opposite: we actually show the children what the parent sees, which amounts to graphs showing the child’s trends towards or away from online risk. Technically we have the ability to show parents a great deal of detail about their child’s online behaviours, but we don’t, because that approach would completely ignore children’s rights to privacy.
Children have a right to safety too, and ensuring it is the role of the parent. But all too often parents don’t know what’s going on in their child’s bedroom or the locked family bathroom, two of the places where the Internet Watch Foundation tells us the worst kind of child abuse takes place, using the child’s own smartphone. Hence the advice and guidance to parents, along with some interactive questionnaires to help them triage what might be going on.
Children are vulnerable online and live in a world that their parents commonly don’t understand. It is a difficult balancing act to protect a child’s right to privacy while keeping them safe online. We have built our software with privacy at its core: it detects when a child is in trouble and tries to guide them to safety, and every time it detects a high-risk message it stops it from being sent. This is how SafeToNet is helping to make the online world a safer place for children.
Download our safeguarding and wellbeing app today from the app store of your choice.