Respecting everyone’s right to privacy, especially children’s.
SafeToNet exists to keep children safe online. Full stop.
We do this by investing money, time and resources into new technologies that automate the safeguarding process in the online world. That process is designed to detect and filter an ever-growing number of risks and threats. To date, SafeToNet has invested over $25m in research and development. Our algorithms are powerful, but we are always working to improve them, especially as new threats appear and online language changes – nationally and internationally, culturally and politically.
CYBERSAFETY IS IN OUR HEARTS AND MINDS.
Automated online safeguarding is a tricky and complex issue.
We have built – and are continuing to build – software that contextualizes a child’s online risks, where possible without recourse to human intervention. This isn’t just a technically complex problem; it also carries many social, moral and legal challenges.
We feel strongly about privacy rights – they are core to all we do – as well as human rights, national and international law, regulatory requirements, and ethical, religious and moral codes. This is tricky territory: we safeguard children around the world and work with different cultures that see risk and online behavior differently.
IT ALL BEGINS WITH PRIVACY
Other than the desire to safeguard children and young people online, the common denominator in all we do is privacy. The SafeToNet software always respects the child’s right to privacy. In fact, SafeToNet has always operated a ‘privacy-by-design’ culture and works closely with children to help us ensure we never cross that line.
Youth Advisory Board
We are proud of our Youth Advisory Board (YAB) which meets every 12 weeks to discuss features, processes and online issues.
Members of the YAB change regularly and include over 30 children of differing ages and backgrounds. They are a smart bunch of young people who keep us on our toes when issues of privacy and confidentiality arise. They want to be kept safe online, but they don’t want parents to snoop or pry. It is a balance.
That’s a topic that is not easy to navigate, especially as parents have an understandable, inherent and, dare we say, legal responsibility to ensure the safety of their children. For clarity, by children we mean persons under the age of 18, as per the UN Convention on the Rights of the Child (UN CRC).
So we are sorry…
Moms and Dads, we understand why you want to know more about what your child is doing, what they are saying and who they are messaging – but we simply can’t tell you. The reality is that we don’t know.
Instead we share with parents their child’s overall safety level based upon the risks detected on their safeguarding keyboard. And honestly, most parents have told us they don’t want to see their child’s messages but they do want to know their child is being safe online and learning about safety as they type.
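To make this concrete, here is a minimal, purely hypothetical sketch of the idea described above: a parent sees only a coarse safety level aggregated from risk detections on the child’s keyboard, never the messages themselves. All names, thresholds and data structures here are illustrative assumptions, not SafeToNet’s actual implementation.

```python
# Hypothetical sketch: reporting an overall safety level to a parent
# without exposing any message content. Names and thresholds are
# illustrative only - this is not SafeToNet's actual API or scoring model.

from dataclasses import dataclass

@dataclass
class RiskEvent:
    """One detection on the child's keyboard. Note what is absent:
    no message text, no contact names, no identifiers of any kind."""
    severity: float  # 0.0 (mild) .. 1.0 (severe)

def safety_level(events: list[RiskEvent]) -> str:
    """Aggregate recent risk events into a coarse band.
    This band is the only signal a parent would ever see."""
    if not events:
        return "safe"
    score = sum(e.severity for e in events) / len(events)
    if score < 0.3:
        return "safe"
    if score < 0.7:
        return "caution"
    return "at-risk"

print(safety_level([RiskEvent(0.2), RiskEvent(0.1)]))  # prints "safe"
```

The design point is that aggregation is one-way: once individual detections are reduced to a band, the underlying messages cannot be reconstructed from what the parent sees.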
On the other hand, some parents want to know exactly who is bullying their child. The reality is that we don’t know and even if we did, we wouldn’t say.
The reasons are simple: we want to protect privacy, and parents sometimes have a habit of jumping in with both feet and making matters worse. It is, of course, a moral maze; even so, we will not cross the lines of privacy.
So, SafeToNet’s software does not allow anyone to see what a child is saying or seeing, who they are talking to or what they are doing.
The Right to Review
Our software does not always get everything right. That is the challenge of letting a computer make decisions for you. This is called Automated Decision Making, or ADM: a computing process that makes a judgment call without a human getting involved. Our software therefore allows a child to share information anonymously with SafeToNet’s data scientists so that they can enhance the accuracy of the algorithms.
The right to challenge an automated decision is enshrined in the GDPR (the EU’s General Data Protection Regulation) and is crucial when safeguarding children. Young people have a right to challenge our software when they think it gets things wrong, so with SafeToNet they can press a button on their keyboard and inform us immediately – whilst still remaining anonymous.
Once the button is pressed, the keyboard removes ALL personal details and sends the message to our data scientists, who analyze the content and then manually retrain the algorithms. This is what is known as a ‘human in the loop’ and allows our AI teams to better understand how to improve the accuracy of our software.
We never know who sent us the message, nor do we know their age or gender. It is a crucial element of the safeguarding journey.
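The ‘right to review’ flow described above can be sketched in a few lines. This is a hypothetical illustration of the principle only: the payload carries the disputed text and nothing else, with obvious identifiers redacted. The function name, field names and redaction rules are all assumptions for illustration, not SafeToNet’s actual code.

```python
# Hypothetical sketch of an anonymized 'right to review' report: before a
# flagged message goes to data scientists for retraining, personal details
# are stripped. Field names and redaction rules are illustrative only.

import re

def anonymize_report(flagged_text: str) -> dict:
    """Build the payload sent when a child presses the review button.
    It carries only the text needed to retrain the classifier -
    no user id, age, gender, device id or timestamp."""
    # Redact obvious personal identifiers inside the text itself.
    redacted = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[email]", flagged_text)
    redacted = re.sub(r"\b\d{7,}\b", "[number]", redacted)
    return {
        "text": redacted,             # the disputed content only
        "child_disputes_flag": True,  # "I think the software got this wrong"
    }

report = anonymize_report("Why was 'email me at alice@example.com' flagged?")
print(report["text"])  # prints "Why was 'email me at [email]' flagged?"
```

Because the payload contains no account or device metadata, the reviewers can improve the classifier without ever being able to link a report back to a child – which is exactly the ‘human in the loop’ trade-off the text describes.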
Children help us to teach and train our software so that it can safeguard them more effectively.
More and more research
There is a great deal of academic research behind the SafeToNet software.
We learn from many sources, not least the SafeToNet Foundation, a UK-registered charity part-funded by SafeToNet. The Foundation exists to seek out and support projects and other initiatives that safeguard children and young people from risks stemming from cyber-bullying, sexting, online grooming, and harmful content on websites and social media. Check out the work it is doing, and also find time to listen to its substantial collection of podcast interviews with subject-matter experts from around the world.