SAFEGUARDING AND DIGITAL WELLBEING

Apple continues to raise eyebrows

From the launch of the Apple Macintosh in 1984, through to the Apple iPod in 2001 and the first iPhone in 2007, Apple’s new technology announcements have been met with a fanfare that has resonated around the world. 

When Apple made their recent announcement about Child Safety, they did so with a similar fanfare. Apple’s Director of Investigations and Child Safety, Melissa Marrus Polinsky, stated: “We’ve invented the future of child safety.” The statement was retracted shortly afterwards and replaced with a more subtle #soproud.

The point here is not to criticise but to highlight that online child safety is not about fanfares and claims of inventing the future. And it is not about just one company. It is, and always will be, about collaboration, especially with those who genuinely understand the problems being addressed. The safety tech industry knows beyond any doubt that Apple cannot genuinely understand the complex landscape of online child sexual abuse; if it did, it would not make the claims it does. The reality is that tech companies, NGOs, charities, law enforcement and, most importantly, the survivors of abuse must work as a team if we are ever going to eradicate the global pandemic of online offending.

In the days that have followed the initial Apple announcement, we have seen several clarifications and admissions from Apple’s Senior Vice President of Software Engineering, Craig Federighi, that the announcement was confusing. Announcing two distinctly different developments at the same time was never going to work out well, but even that does not excuse the language used.

Words matter 

In the past few years, we have seen a welcome development in tackling online child abuse. The vast majority of those involved in the various aspects of dealing with the problem now use the term CSAM (Child Sexual Abuse Material) instead of the abhorrent term ‘child pornography’. In Mr Federighi’s 12-minute interview with the Wall Street Journal he says ‘child pornography’ on four occasions and does not mention CSAM once. He and Apple unfortunately lost all credibility at this point, and it was easily avoidable had they genuinely known what they were talking about. It was these words that immediately raised the eyebrows of the safety tech industry. We all knew there was a degree of showmanship going on: yet another fanfare, but one that this time carries serious consequences.

Words are important. ‘Child pornography’ implies consent, and a child cannot legally consent to being abused. The term also plays down the severity of the images and videos in circulation. In the days following the Apple announcement, several privacy campaigners asked, “what about if Apple identify an innocent picture of a child in a bath?” The public need to know the uncomfortable truth: much of the CSAM in circulation relates to children under 10 years of age, and some are literally a few hours old. These are images and videos of children being sexually abused and tortured.

The reasons why the terminology is so important can be found in the Luxembourg Guidelines.

So, what’s this got to do with Apple? Well, every professional involved in online child safety and every survivor will tell you they will never use the term ‘child pornography’. Surely Apple have experts in the field advising them on such fundamental matters?

Some may argue that ‘child pornography’ is the legal definition, and that is certainly the case in the USA. But if you know, you know. There is a straightforward way to address this when discussing the issue: refer at the outset to the legal definition and then state that you will refer to it as CSAM from then on.

Threshold 

We have discussed this in a previous article from SafeToNet. One piece of CSAM is one too many. One on its own is illegal. One on its own represents an abused child, a life ruined. Mr Federighi has now told us that Apple’s systems don’t report child abuse until they have seen 30 separate CSAM files from a single user. We are not being facetious when we say that this implies that 29 is OK. We know Mr Federighi’s point links to accuracy, but that is nonsense. If you have one piece of CSAM, the relevant authorities should be made aware. No question. Remember, these are hashed files, where the accuracy of the hash match is more reliable than DNA. Imagine a murder scene where the CSI recovers the offender’s DNA but cannot use it in evidence because they only recovered 29 separate samples!

Since the initial Apple announcement, concerns have been raised that if the threshold were just one file, an account might be incorrectly flagged. Aside from the reliability of a hash match, it is also important to remember that any referral would be checked for accuracy by NCMEC and by the receiving law enforcement agency before action is taken.

To be clear, if Apple remove the threshold and make the decision to refer to NCMEC based upon one image alone, then we would wholeheartedly support this functionality. We are sure our colleagues in law enforcement would too.
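To make the threshold point concrete, the sketch below shows the essence of hash matching against a database of known material with a configurable reporting threshold. It is a conceptual illustration only: Apple’s actual system uses a perceptual hash (NeuralHash) and cryptographic matching protocols rather than the plain file hash used here, and names such as `knownHashes` and `reportThreshold` are our own illustrative assumptions, not Apple’s API.

```swift
import Foundation
import CryptoKit

// Conceptual sketch only. Apple's real system uses a perceptual hash
// (NeuralHash) and private set intersection; a plain SHA-256 file hash
// stands in here, and `knownHashes` / `reportThreshold` are illustrative
// names invented for this example.
struct HashMatcher {
    let knownHashes: Set<String>   // hashes of known CSAM, e.g. supplied via NCMEC
    let reportThreshold: Int       // Apple's stated figure is 30; the argument above is for 1

    private func hexHash(of data: Data) -> String {
        SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
    }

    /// Returns true once the number of matching files reaches the threshold.
    func shouldReport(_ files: [Data]) -> Bool {
        let matches = files.filter { knownHashes.contains(hexHash(of: $0)) }.count
        return matches >= reportThreshold
    }
}

// With reportThreshold = 30, an account holding 29 matching files is never
// referred; with reportThreshold = 1, a single confirmed match triggers a referral.
```

The only difference between the two policies in this sketch is the value of the threshold; the reliability of each individual match is unchanged.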

Privacy is never an excuse to allow possession of illegal material 

Apple’s iCloud CSAM detection technology has caused worldwide controversy. Privacy campaigners claim it marks the beginning of the end of an individual’s privacy, but perhaps the privacy Apple talk so fervently about isn’t quite what it seems anyway? NCMEC statistics show Apple reported only 265 cases in 2020, which seems to corroborate the suggestion that they see little CSAM activity on their platform. For comparison, Facebook made reports to NCMEC on 20,307,016 occasions in 2020. Apple is therefore playing the ‘catch-up game.’

However, publicly available internal iMessages from February 2020, unsurprisingly titled ‘Highly confidential – attorneys’ eyes only’, from Eric Friedman, Apple’s head of the Fraud Engineering Algorithms and Risk unit (FEAR), suggest that senior Apple figures have known for some time that their network is riddled with CSAM. In conversation with Herve Sibert, Apple’s Security and Fraud Engineering Manager, Friedman writes: “The spotlight at Facebook etc. is all on trust and safety, in privacy they suck. Our priorities are the inverse.” He then states: “Which is why we are the greatest platform for distributing child porn etc.”

Is this anecdotal, or do Apple see more than they claim and use privacy as an excuse for not disclosing to NCMEC? Whatever the case, it is clear that Apple must do something – indeed all platforms must, whether they are device manufacturers or social networks, messaging apps or gaming platforms.

Regardless of this point, possession and distribution of CSAM are illegal, and it can only ever be a good thing that Apple are seeking ways to prevent them. Anyone who cannot see that is sadly missing the overall point of protecting children both online and offline. So, in this instance, well done Apple. However, this technology should have been the subject of its own press release. It has cast a shadow over the other child safety features Apple announced, which are worthy of more discussion around privacy and legalities. By Apple’s own admission they made a mess of these announcements, which saddens us. There is a good story to tell here, but by going for the fanfare they have undone a lot of good and placed themselves, and the tackling of child sexual abuse, in a challenging spotlight.

Non-CSAM Child Safety Technology

The second part of the Apple announcement relates to child safety features and has the prospect of being the most important for children themselves. To explain this in simple terms, think of CSAM detection as a reactive tool, used where the abuse of a child has sadly already been committed, whereas the child safety features can be proactive, preventing children from being exposed to or sharing harmful content in real time.

Apple has called this feature Communication Safety in Messages and states that the tool will work within iMessage to warn children and their parents when they receive or send sexually explicit photos. We encourage this on-device methodology and see it as the future of online child safety, particularly as end-to-end encryption expands to more aspects of our online lives. We have gone a step further at SafeToNet with our SafeToWatch live-stream video filter, which prevents a child from both consuming and producing sensitive imagery. This technology is platform agnostic and works across all messaging apps and social networks, not just iMessage. Like other safety technology companies, we understand in detail the issues of privacy and the need to ensure the child’s rights are protected and defended. Apple’s announcement therefore raised our eyebrows once more.
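As a rough illustration of what such an on-device approach involves, the sketch below scores an image locally and decides whether to deliver it or blur it behind a warning, so nothing ever leaves the device for analysis. The classifier protocol, its score and the threshold are hypothetical placeholders for illustration only; this is neither Apple’s Communication Safety implementation nor SafeToNet’s SafeToWatch.

```swift
import Foundation

// Simplified sketch of an on-device approach: imagery is scored locally,
// before it is displayed or sent, so nothing leaves the device for analysis.
// `SensitiveImageClassifier`, its score and the threshold are hypothetical
// placeholders, not Apple's Communication Safety or SafeToNet's SafeToWatch.
protocol SensitiveImageClassifier {
    /// Returns a 0...1 likelihood that the image is sexually explicit.
    func explicitnessScore(for imageData: Data) -> Double
}

enum MessageImageAction {
    case deliver     // show or send the image as normal
    case warnChild   // blur the image and show an age-appropriate warning first
}

struct OnDeviceSafetyFilter {
    let classifier: SensitiveImageClassifier
    var warnAbove: Double = 0.8   // illustrative threshold, tuned per deployment

    func action(for imageData: Data) -> MessageImageAction {
        classifier.explicitnessScore(for: imageData) > warnAbove ? .warnChild : .deliver
    }
}
```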

Sadly, our eyebrows were raised further when Apple said it will only deploy its image detection technology on a child’s device where the child is 17 years of age or younger, and will not notify parents when the child is aged 13 to 17. Mmmmm. How do they know how old the user is? Age verification technology is the holy grail of the safety tech industry. If you accurately know a user’s age then you can immediately, for example, stop a 12-year-old from using Instagram, TikTok et al. So why don’t you, Apple? Maybe if they did stop underage usage of globally popular apps, the young would want an Android device rather than an iPhone.

Unanswered Questions 

Apple wants an entire household to use its Family Sharing technology. It’s where all the bells and whistles can be found and, of course, it ‘ties’ users into a lifetime of Apple products. We all understand why that makes sense to Apple’s shareholders. But on the questions of age verification and privacy, the Apple solution is only as reliable as the adult controlling the Family Sharing account. Many parents would admit that they agree to alter their child’s date of birth so that the child can play age-restricted games and use age-restricted platforms. The complexities of age verification are considerable. For example, a child can marry in the State of Massachusetts at the age of 12 with their parents’ consent. Who would receive notification of their communications?

This raises some interesting questions about how Apple’s image detection technology can alert a parent when sexually explicit imagery is found on their child’s device. Now don’t get us wrong: we firmly believe that technology that detects and prevents the consumption of sensitive imagery on a child’s phone should be available by default. But the privacy of the child must be balanced with the needs of safety. Implementing systems like Apple’s can cause children to go underground; it is clear they will simply not use iMessage if they think the technology is spying on them.

Online Child Safety – Teamwork 

So, our final questions are these: will Apple’s developments result in the safeguarding and protection of more children from harm, or is the scope of the additions too narrow to make a major difference? Do they really understand the problems they are tackling, or do they simply want to look good in the eyes of those who only read the headlines? This should be about protecting children, not simply cleaning up areas of their platform and displacing the activity.

And our ongoing challenge to Apple is simple: please engage with the worldwide community looking to tackle online child safety. We will help you to tackle this complex issue for the benefit of your customers and, most importantly, the children who love and rely so heavily on your devices and technology.


Tom Farrell QPM is SafeToNet’s Global Head of Safeguarding Alliances
