Facebook takes action on child nudity and sexual exploitation

Facebook hit the headlines again recently, but for a very positive reason: in a three-month period, the social media giant removed 8.7 million images that violated its policies on child nudity or the sexual exploitation of children. Here, Karl Hopwood, Helpline Coordinator, gives his view on the news.

Date: 2018-10-30 | Author: Karl Hopwood, Helpline Coordinator

This is really important and Facebook should be applauded. Many will know that Facebook (unlike some other social media platforms) has a very strict policy on nudity; this means that some of the removed content would not necessarily be illegal, merely in breach of the platform's own rules.

Apparently these 8.7 million images all violated the policies relating to child nudity or the sexual exploitation of children (so this is different to the 21 million pieces of content that Facebook took action on in Q1 2018 for violating its adult nudity and sexual activity policy – you can read more about this in the Facebook Transparency Report). The child nudity and sexual exploitation of children policy rationale states:

"We do not allow content that sexually exploits or endangers children. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images".

We know that such images have been misused by others. The police have talked about innocent images being scraped from Facebook and other social media sites and later found in "collections" belonging to paedophiles. Indeed, in 2015 the Children's eSafety Commissioner in Australia claimed that "innocent photos of children originally posted on social media and family blogs account for up to half the material found on some paedophile image sharing sites". Against that background, Facebook taking action even on images shared with good intentions makes sense.

Another interesting aspect of this story is that Facebook removed 99 per cent of the content before anyone reported it. Clearly this does not mean that nobody saw or downloaded it, but it shows the power of artificial intelligence and machine learning in identifying this type of content so that it can be removed.

It is very easy to criticise the social media giants and say that they are not doing enough to protect their users, and easy to blame the industry for all of the risks that children and young people face when they go online. However, Facebook has, for many years, led the way in combating this type of abuse online, developing the tools that can spot and remove this content. Yes, there is always more to be done, and some would say that blaming the tech companies is an easy option which takes some of the pressure off individuals: the parents who post the images in the first place, and others who may sexualise such images with their comments or even re-post and misuse the content. The tech companies can always do more, but this statement from Facebook is a clear demonstration that efforts are being made.

It is often the big players such as Facebook that understandably make the headlines, but what about the other, smaller platforms which are still used by millions of young people? The Insafe network has long-established relationships with many providers, including Facebook, but some still remain elusive, and Safer Internet Centres (SICs) would welcome a dialogue that would help to keep children and young people safer when using their services. Facebook has said that it will work with Microsoft and other industry partners to develop tools that smaller companies can use.

The figure of 8.7 million images has certainly grabbed the headlines, but there is perhaps a greater underlying concern which highlights the need for better education and awareness raising for all: why did so many users think that it was acceptable to post such content in the first place? In her blog post, Facebook's Global Head of Safety, Antigone Davis, states that to "avoid the potential for abuse, we take action on nonsexual content as well, like seemingly benign photos of children in the bath". This approach has to be the correct one if (as mentioned earlier) we know that some people are misusing this type of content. If people want to share images like this with family and close friends in order to capture a childhood memory, then perhaps the internet is not the place to do it.

As with many aspects of online safety, this is about behaviour rather than technology but, as Facebook has clearly demonstrated this week, technology has a huge role to play. The company has also announced that it is developing new software to help NCMEC (the National Center for Missing & Exploited Children) to prioritise the reports that it shares with law enforcement agencies (LEAs) around the world, so that the more serious cases can be addressed first. Surely this has to be welcomed and applauded. In 2017, UK-based Chief Constable Simon Bailey said that "The police service is dealing with an unprecedented volume of reports of child sexual abuse - non-recent abuse, ongoing abuse, online abuse and peer-to-peer abuse. The numbers are continuing to rise. We have reached saturation point". This announcement from Facebook can only help the police and others to be more effective in their fight against child exploitation and abuse.

It is easy to criticise but sometimes it is also important to commend.

The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of the Better Internet for Kids Portal, European Schoolnet, the European Commission or any related organisations or parties.

See the Facebook entry in the Better Internet for Kids (BIK) Guide to online services, a tool which provides key information about some of the most popular apps, social networking sites and other platforms commonly used by children and young people (and adults) today.