How SafeToNet works
It has taken tens of thousands of hours to develop SafeToNet’s software and, in particular, the way its artificial intelligence algorithms make automated decisions to safeguard children online. It is a never-ending task: language changes rapidly, as do cultural and ethical beliefs. The software is taught to correlate language with behavior patterns and to decide in real time when to step in and prevent your child from sending anything that could harm them or hurt others.
The software is constantly in learning mode: it improves with age and continues to get better every day a child uses it.
There is always room for improvement, especially as social norms change and new threats to our children arise. However, our software is designed to learn in real time and to predict and detect the signs and patterns that could affect a child’s safety online.
It only finds threats
Our software only looks for issues that could harm a child. It has no understanding of whether a conversation is about last night’s game or the latest fashion accessory. Instead, it analyzes the words a child uses together with their patterns of behavior. For example, when the pattern of a conversation changes, with messages becoming shorter and being sent more quickly, this can in certain cases indicate an argument or a discussion with sexual connotations. The software only analyzes messages being typed and sent from the child’s device.
For legal reasons, SafeToNet does not analyze incoming messages before a child has read them.
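To make the pattern idea concrete, here is a minimal, hypothetical sketch of how message length and sending speed could be turned into features. The feature names, thresholds and the `looks_heated` helper are illustrative assumptions, not SafeToNet’s actual model.

```python
from statistics import mean

def cadence_features(messages):
    """Extract simple pattern features from a list of
    (timestamp_seconds, text) tuples for one conversation."""
    lengths = [len(text) for _, text in messages]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(messages, messages[1:])]
    return {
        "avg_length": mean(lengths),
        "avg_gap_seconds": mean(gaps) if gaps else None,
    }

def looks_heated(features, max_len=15, max_gap=5.0):
    # Hypothetical thresholds: short messages sent in quick
    # succession can, in some cases, indicate an argument.
    return (features["avg_length"] < max_len
            and features["avg_gap_seconds"] is not None
            and features["avg_gap_seconds"] < max_gap)
```

A real system would combine many such signals with the content analysis itself; cadence alone proves nothing, which is why the text says “in certain cases”.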
How is it trained?
We employ a number of PhDs, data engineers and scientists. We also work with subject-matter experts, many of whom hold PhDs in their own fields. We work with universities, research groups, linguistics specialists, cyberpsychologists, cybercriminologists, youth advisory groups and more.
We have studied seemingly endless academic articles and have written white papers of our own. We speak at conferences and seminars, where we also listen and learn. We consult with children and parents, teachers and carers. We employ our own safeguarding team, run by specialists who have seen first-hand the issues that children face online and offline.
It is from all of these inputs that we continually train our algorithms to look for patterns that suggest threats and risks to a child. This draws on the complex fields of sentiment analysis and predictive, statistical and correlative analytics.
DATA COMES NEXT
Over time we have produced training apps that periodically ask parents and children for permission to access their data so that we can teach our models/algorithms.
We temporarily make these training apps available online, collect data and then withdraw them. This is an ongoing process that ensures we stay up to date with current trends, detect more threats and, in turn, safeguard more children. Note: the SafeToNet software itself is not a training app and never allows a child’s data to leave their device unless the child has allowed it to do so.
On-device Learning and Training
This is hard and tricky work, especially as our software runs purely on the child’s device. That is fundamental not only to how our software works but also to our core mission of keeping a child safe while protecting their right to privacy. To see this for yourself, simply put your child’s device in airplane mode: the SafeToNet software still works.
SafeToNet’s R&D teams have developed a proprietary process in the complex field of Federated Learning (FL). FL allows us to develop our algorithms (better known as models) so that they can be distributed over millions of mobile devices without compromising privacy. It also helps us to personalize the models for an individual user – crucial in safeguarding, because children differ so widely in age, gender, nationality, beliefs, upbringing and more.
FL allows us to train the models locally on a child’s device and then share the patterns they have learned – not the child’s data or messages – with all other devices using SafeToNet.
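The core idea behind federated learning can be sketched in a few lines. The example below is a toy federated-averaging scheme, not SafeToNet’s proprietary process: each device trains a tiny linear model on its own data and sends back only a weight delta, which a coordinator averages into the shared model. The raw data never leaves the device.

```python
def local_update(weights, local_data, lr=0.1):
    """Train a tiny linear model on one device's data and return
    only the learned weight delta -- never the data itself."""
    w = list(weights)
    for x, y in local_data:  # one pass of gradient descent
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return [wi - w0 for wi, w0 in zip(w, weights)]  # delta only

def federated_average(weights, deltas):
    """Aggregate the deltas from many devices into a new shared model."""
    n = len(deltas)
    avg = [sum(d[i] for d in deltas) / n for i in range(len(weights))]
    return [w + a for w, a in zip(weights, avg)]
```

Each child’s messages stay on their device; only the numeric deltas travel, and the improved shared model is then distributed back to every device.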
MORE AND MORE RESEARCH
SafeToNet also helps to fund the SafeToNet Foundation, a UK registered charity that exists to search out and support projects, and other initiatives that focus on safeguarding children from risks stemming from cyber-bullying, sexting, online grooming, and harmful content on websites and social media.
Check it out and see the work it is doing but also find time to listen to its substantial collection of podcast interviews with subject matter experts from around the world.
BUILDING DIGITAL RESILIENCE
We cannot change human behavior: we cannot stop bullies from bullying or predators from preying on children. However, we can help improve the resilience of children so that they are better armed and better able to deal with the largely unregulated internet.
We do this by educating and informing both parents and children about the actions to take and the behaviors to follow that help mitigate the risks so often encountered online, making children more resilient digital citizens.
The SafeToNet safeguarding software does this “in the moment” and as risks appear.
The SafeToNet software was designed from the outset with privacy in mind. It uses artificial intelligence (AI) to automatically detect and filter risk.
Because everything is handled without human intervention, the software – like humans – can sometimes get it wrong. Sadly, it cannot be perfect; instead, it works by getting to know your child over a period of time. We call this Machine Learning.
During the Machine Learning phase, the software will make some mistakes but bear with it – like most of us it improves with age!
As time passes the software will become more accurate and will better safeguard your child.
HOW SAFETY IS MEASURED
The Safety Indicator adjusts depending on many factors. These include: the risk level of messages being typed; how many messages are filtered over a given period of time (for example, many filtered messages in quick succession can indicate signs of aggression); the time of day messages are being written; anomalies in behavior patterns that can suggest risk; and more.
YOUR CHILD SEES WHAT YOU SEE
The Safety Indicator is mirrored on both the parent’s and child’s devices. This helps a child to better understand how their risk changes throughout the day, guiding them to be safer digital citizens and, crucially, forming the basis of an informed conversation between parent and child.