SafeToWatch, End-to-End Encryption & Privacy: Commentary on WIRED Coverage

SafeToNet develops technology to make the internet a safer place, especially for children.

On Thursday the 9th of March 2023, WIRED Magazine released an article summarising concerns raised by WhatsApp CEO Will Cathcart about the proposed Online Safety Bill in the United Kingdom. The same article also mentions SafeToWatch, our real-time solution for detecting and preventing harmful content in images & videos.

Given the concerns voiced by Mr. Cathcart, we would like to offer a short commentary on how our SafeToWatch solution can prevent child sexual abuse material (CSAM) from being created, consumed, or distributed on digital platforms without compromising user privacy.

Millions of people use WhatsApp to share files daily. While WhatsApp states that technology able to scan files for harmful content while maintaining privacy doesn't exist, the platform itself already scans shared files on a regular basis. This information is publicly available in the WhatsApp FAQ section here: Suspicious Files on WhatsApp

The statement issued by WhatsApp reads: “WhatsApp automatically performs checks to determine if a file is suspicious, to ensure that the format is supported on WhatsApp and doesn’t crash the app on your device.
To protect your privacy, these checks take place entirely on your device, and because of end-to-end encryption, WhatsApp can’t see the content of your messages.”

This means that files are scanned locally by WhatsApp to protect the user from harmful content being received on their device. If embedded in WhatsApp, SafeToWatch would work in exactly the same way.
SafeToWatch can locally scan images & videos shared via the platform to check for the presence of child sexual abuse material. SafeToWatch does not include a reporting function, meaning that platforms implementing the solution can decide what action is taken when harmful or illegal content is detected. In its FAQ, WhatsApp states:

“For some files you share or receive in a chat, you may receive an error message.

If you’re the sender, you will see this pop-up:
  • This file is in an unusual format that may indicate it is dangerous, corrupted, or otherwise not supported on WhatsApp. For security precautions, this file cannot be sent.
If you’re the recipient, you will see this pop-up:
  • This file was sent in an unusual format that may indicate it is dangerous, corrupted, or otherwise not supported on WhatsApp. For security precautions, this file cannot be opened.”
WhatsApp also deploys the same technique when suspicious links are shared on the platform, with users receiving warning messages once the platform has scanned the link.

A similar functionality could be deployed if SafeToWatch were embedded in an encrypted messaging platform. Images and videos would be scanned locally for harmful or illegal content, and if any were detected, the file could be blocked from being shared, with a pop-up message delivered to the user. For example: "This file could not be sent according to our Terms & Conditions."
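The scan-and-block flow described above can be sketched in a few lines. This is a hypothetical illustration only: `classify_media`, `try_send`, and the threshold value are made-up names and numbers, not SafeToWatch's actual API. The key point the sketch shows is that the check runs entirely client-side, before anything is encrypted and sent.

```python
# Hypothetical sketch of an on-device scan-and-block flow.
# `classify_media` stands in for an on-device model such as SafeToWatch;
# its name, interface, and the 0.9 threshold are illustrative assumptions.

def classify_media(file_bytes: bytes) -> float:
    """Return a confidence score (0.0-1.0) that the media is harmful.

    Placeholder: a real implementation would run an on-device ML model.
    """
    return 0.0  # stub: treats all content as safe


BLOCK_MESSAGE = "This file could not be sent according to our Terms & Conditions."


def try_send(file_bytes: bytes, classifier=classify_media, threshold: float = 0.9) -> dict:
    """Scan locally before sending; block the file and warn the user if flagged."""
    score = classifier(file_bytes)
    if score >= threshold:
        # Nothing leaves the device: the file is blocked client-side,
        # so the end-to-end encryption of the transport is untouched.
        return {"sent": False, "popup": BLOCK_MESSAGE}
    return {"sent": True, "popup": None}
```

Because the decision is made on the device, the platform never sees the plaintext content, mirroring how WhatsApp describes its own suspicious-file checks.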

Of course, further safeguards can be put in place, whereby content that is blocked from being viewed or sent is manually reviewed by human content moderators.

If WhatsApp is able to scan files for viruses and links for suspicious content without breaking encryption, why would scanning for CSAM in the same manner break encryption?

SafeToWatch is unlike any solution currently on the market, as it uses predictive analysis to prevent CSAM at the source. While other detection tools identify CSAM by matching hash data against known hashes from CSAM databases, SafeToWatch works in real time to prevent the creation, consumption, and distribution of CSAM.
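To make the contrast concrete, the hash-matching approach used by many existing tools can be sketched as a simple lookup against a database of known digests. This toy uses SHA-256 purely for illustration; production systems use perceptual hashes (such as PhotoDNA) that tolerate re-encoding and resizing, and the names here are hypothetical.

```python
import hashlib

# Toy illustration of hash-matching detection: a file's digest is checked
# against a set of digests of previously identified material. By design,
# this approach can only flag content that is already in the database,
# which is why it cannot prevent newly created material at the source.

KNOWN_HASHES: set[str] = set()  # digests of previously identified files


def matches_known_hash(file_bytes: bytes) -> bool:
    """Return True if this exact file has been catalogued before."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES
```

The limitation is visible in the code itself: new, never-before-seen content produces a digest that is not in the database, so hash matching misses it, whereas real-time content analysis evaluates each image or video on its own.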

SafeToWatch simply analyses images & videos and returns an assessment of the content they contain. It acts as a digital moderator that can advise platforms and their moderators on the suspected content of visual data. SafeToWatch does not use a database to classify content, and the analysis is carried out at device level. Data generated during SafeToWatch training has shown that the model is highly accurate. To maximise user experience, evaluation data is used to provide guidance on optimal confidence thresholds so that adopters can tailor SafeToWatch to their needs with each release.

