Insafe insights on... inappropriate content

2020-10-12 – BIK Team – potentially harmful content

The Insafe network of awareness centres, helplines and youth panels, in partnership with INHOPE (the International Association of Internet Hotlines, dedicated to the removal of illegal online content), operates Safer Internet Centres (SICs) in EU Member States, Iceland, Norway, Russia and the United Kingdom in the drive to keep children and young people safe online. Through a range of services, SICs respond to the latest online issues, helping to promote the many opportunities the online world offers, while also addressing the challenges. And while Europe’s children and youth are the main beneficiaries of this work, the Better Internet for Kids (BIK) programme of activities also reaches out to, and collaborates with, a range of other stakeholders – parents and carers, teachers and educators, researchers, industry, civil society, decision makers and law enforcement.

The “Insafe insights…” series draws on the experience and expertise of the Insafe network to tackle some of the most topical issues encountered in its day-to-day operations. Drawing on statistics and helpline case studies, this resource aims to outline the issue and some possible responses, while also pointing to sources of further information and support.

Inappropriate content… a definition

Inappropriate content is a term used to capture a wide and ever-increasing range of problematic content online, from hate speech to images of self-harm or pro-anorexia (“pro-ana”) websites. Alongside all of the amazing content that is available online, there is some which is inappropriate for some audiences.

The EU Kids Online project classified online opportunities and risks for children and identified the following risks related to content:

  • Advertising, spam, sponsorship
  • Violent, gruesome, hateful content
  • Pornographic, harmful sexual content
  • Racist, biased information and advice (such as regarding drugs)

The EU Kids Online project also asked children and young people what bothered them when they were online. The graph below shows the content-related risks that they identified.

Figure 1: Bar chart showing the main content-related online risks mentioned by children and young people interviewed in the framework of the EU Kids Online project (described under the heading “Figure 1 description” at the bottom of the page).

More recently, the Global Kids Online network highlighted the following with regard to online risks for children:

  • Children were most likely to report being upset in the past year if they had encountered hate speech or sexual content online, been treated in a hurtful way online or offline, or met someone face to face that they had first got to know online.
  • The number of online activities in which children engage, the digital skills they develop and the online risks they encounter all increase as children get older. The increases in these variables are all likely to be related.

The research looked specifically at the percentage of children who had been exposed to online risks in the following categories:

  • Self-harm content
  • Suicide content
  • Hate speech
  • Violent content
  • Sexual content
  • Being treated in a hurtful way
  • Meeting someone face to face

The first five listed would be covered by the “harmful content” umbrella. The key findings are shown below.

Figure 2: Bar chart showing the percentage of children who have been exposed to online risks, by country (described under the heading “Figure 2 description” at the bottom of the page).

There have been a number of research papers which have asked children and young people what bothers them most when they are online. The graphic below comes from a 2018 study of almost 40,000 children and young people in the UK who were asked to identify the worst things that can happen online.

Word cloud showing the different types of inappropriate content mentioned: self-harm and suicide, hate speech, bullying, fighting, violent and obscene videos, sexual approaches from adults, animals being hurt, being asked for nudes, and pornography.

The Insafe helpline network deals with all kinds of inappropriate content reported by callers, and some of the definitions used to categorise it are listed here:

  • Advertising/commercialism: Chain emails, phishing sites, misleading policies, terms and conditions.
  • Gaming: Any issues related to gaming content, possible addiction, and so on.
  • Hate speech: Discrimination or prejudice against others on account of their race, religion, ethnic origin, sexual orientation, disability or gender – this could include racist material online or racist comments made by a group or individual.
  • Potentially harmful content: Content relating to terrorism, online prostitution, drugs, eating disorders, self-harm, and so on, including calls related to sites promoting suicide and explaining ways to commit suicide. This may include referrals to a hotline.

Clearly, there is some content online which could be offensive or upsetting to some individuals but perfectly acceptable to others. Some social media providers warn users about such content, which they describe as “sensitive”, and do not make it immediately visible. This approach has provoked criticism from some who suggest that it simply highlights this type of content and encourages the curious or vulnerable to click on it.

Parents and carers have a lot of concerns around inappropriate content; they worry about the types of things that their children might be seeing online and the potential damage that this can do – for example, the impact that it might have on their behaviour, health and wellbeing.

Unfortunately, there is a lot of hype in the press around some of these risks, but they clearly do exist. Many parents try to limit their children’s exposure to inappropriate content by using filters and by restricting devices. This is important, and mirrors the boundaries many parents already set around offline issues. The problem with the online manifestation of inappropriate content is that it is much more difficult to manage: one parent may have successfully restricted their broadband and even locked down mobile devices, but when a child spends time with other young people, we cannot assume that the same levels of filtering and monitoring are in place there, or that they exist at all. This is why dialogue and discussion are so important.

What should children and young people do when they come across content that worries or upsets them – content that adults would think is inappropriate? We want them to speak to someone – ideally a trusted adult – but if the response is to ban them from a particular platform or to take away their internet access, children quickly learn to keep quiet, so this is perhaps not the best approach.

Online pornography is a significant concern for parents. As more and more children get their own devices earlier and earlier, and as the mobile phone becomes the most popular device for accessing the internet, it is perhaps inevitable that children will encounter adult content. It is worth noting that although most parents are horrified when they discover that their child has seen pornography, quite often the child did not go searching for this content on purpose. A report from the BBFC (British Board of Film Classification, the designated age-verification regulator under the Digital Economy Act in the UK) found that over 60 per cent of children aged 11-13 said that their viewing of pornography was mostly unintentional; this figure fell to 46 per cent among young people aged 16-17.

Experiences from the Insafe network

As seen above, inappropriate content covers a wide range of issues, and Insafe helplines categorise it under various headings. Overall, inappropriate content could be said to be the main issue that helplines deal with, since cyberbullying and sexting (both the subjects of other “Insafe insights…” reports) clearly also involve content which is deemed inappropriate.

Inappropriate content is addressed at every Insafe training meeting. In 2018, there were discussions about children and young people being exposed to inappropriate online challenges which, in some cases, had led to young people self-harming or committing suicide. Known as the Blue Whale Challenge, this was an issue which needed to be addressed quickly. Although the challenge was a hoax, it was clear that young people were harming themselves as a result, and authorities faced a dilemma over when to alert people to the issue: do it too soon and you can be accused of scaremongering; too late and you are not doing your job. Discussions about this type of issue at a network level mean that a consistent approach can be applied, as seen more recently with concerns over the Momo challenge. Helplines, awareness centres and social media providers met quickly following press coverage in Austria and the United Kingdom to devise a strategy for working with the media to provide useful information around the issue. There was a resurgence of this type of challenge in 2020, when several media outlets reported on a new online challenge which bore many similarities to the original Blue Whale hoax. Fortunately, it appears to have been short-lived, and Safer Internet Centres met quickly with social media providers to ensure that rapid action was taken to remove this type of content.

Safer Internet Centres that are part of the Insafe network enjoy positive relationships with many of the key social media providers. This is very important in understanding how best to support end users who encounter problems on the various platforms. Colleagues have a good knowledge of community standards or community guidelines, and of the most effective ways to have problematic content removed. One key piece of advice from social media providers is that users should provide as much information as possible when making a report: the more context that can be provided about an offending piece of content, the more likely it is to be dealt with properly.

Unfortunately, reporting mechanisms on social media platforms have not enjoyed a good reputation in recent years, with many users expressing frustration that they had reported content but that nothing happened as a result. Some platforms now keep users informed about action taken on a particular report they have made, and many platforms are keen to provide as much transparency as possible around reporting processes. The recent COVID-19 pandemic has affected social media companies, just like everyone else, with most staff having to work from home. This has inevitably caused some disruption to content moderation, and responses to removal requests have fluctuated quite widely during this period. Not surprisingly, the tech companies have also relied more on artificial intelligence (AI) solutions, which have not always produced the best outcomes. Strong relationships between the Insafe network and industry have meant that any issues can be addressed quickly and solutions found.

Similarly, some of the major providers have released transparency reports which show how they have dealt with specific types of content, how much they have removed, and so on. The figures are sadly quite staggering, as is the volume of traffic that these sites attract.

In terms of the number of calls that Insafe helplines deal with regarding harmful and inappropriate content, the category “potentially harmful content” regularly accounts for 10 per cent of all calls.

Further statistics can be found on the Better Internet for Kids portal.

Insafe helpline case studies

Many of the cases that Insafe helplines deal with could be classified as addressing inappropriate content, as can be seen from the following examples.

A teenage girl contacted the German helpline and explained that some of her friends at school had been talking about pro-ana sites. The girl was curious, went looking, and found some of these sites herself. She started to worry that she was too fat and talked about hearing the voice of “Ana” (the personification of anorexia used on such sites) inside her head. She began to feel guilty about being overweight, started throwing up after meals and became obsessive about her weight. The helpline counsellor discussed with her the potential risks of pro-ana sites and was very clear about the fact that Ana was not a friend and that anorexia is a serious illness. A range of associated issues were discussed, including body image, beauty ideals, reasons for wanting to lose weight, and potential warning signs. The girl was encouraged to seek further help as soon as possible, and suggestions of where she could find this were given.

Quite often, parents will contact helplines because they are worried about their children coming across or being exposed to potentially harmful content. The concerns often focus on games and gaming, and many parents suddenly become aware of the risks when deciding to give their child their first tablet or smartphone. The example below, from Luxembourg, is typical of the queries that helplines regularly deal with from parents.

A mother contacted the helpline to ask for some advice and support. Her son had just moved from primary to secondary school and was now asking for a smartphone. The mother had some real concerns about this and was particularly worried about the possible access to pornographic content, drugs, and contact with strangers. She also had concerns about the dark web and was worried that her son might be able to buy things illegally there. The mother explained that she felt ill-equipped to deal with this as she was not an expert on internet issues and felt overwhelmed. The counsellor provided positive support for the mother and tried to empower her so that she felt able to address her concerns. She was also directed to a number of useful websites in order to find out more about the potential threats and challenges.

Sometimes, very young children make contact because they have been exposed to particularly unpleasant content online.

An eight-year-old girl contacted the Latvian helpline because she was frightened. She had been browsing the internet and had clicked on a hyperlink which led her to a pornographic site. Initially the girl was curious and so clicked on a few of the videos but, after a short time, she was worried and confused. She could not understand what the adults were doing in the videos or why they were doing it. She was afraid of speaking to her parents as she thought she would get into trouble. Because the girl was so scared and experiencing such negative emotions, the counsellor took the decision to undertake some crisis intervention. They explained that clicking on random links could lead to very unpleasant content. The counsellor also explained briefly (and in an age-appropriate way) what had been happening in the videos that the girl had seen. She was then encouraged to speak to her parents if she could and also try to get them to call the helpline so that someone could explain more about filtering and how this could be used to protect children from this type of content.

Insafe resources

Safer Internet Centres have developed various educational resources and awareness-raising videos aimed at helping teachers, parents and carers, and children and young people to discover the online world safely. A selection of resources touching on issues relating to inappropriate content is detailed below:

Many more resources are available from the Better Internet for Kids (BIK) resource gallery, covering a whole range of online safety issues in a variety of languages.

Further information and advice

For further information and advice, please contact your national Safer Internet Centre (SIC).

To keep up to date with safer and better internet issues more generally, visit this website often, subscribe to the quarterly BIK bulletin, or check out the Insafe Facebook and Insafe Twitter profiles.

Figure 1 description

Overview

The chart shows the main online content-related risks mentioned by children and young people, which are, from most frequently to least frequently mentioned: pornographic or sexual content; violent/aggressive content; unwanted content; scary content; gory content; content about drugs; commercial content; content about self-harm or suicide or anorexia/bulimia; violent pornography; racist content; hateful content; content harmful to self-esteem.

Values

Numerical values presented on the image: What were all the online risks children mentioned?

  • Pornographic or sexual content: 19.6 per cent of all risks
  • Violent/aggressive content: 15.3 per cent of all risks
  • Unwanted content: 7.5 per cent of all risks
  • Scary content: 3.4 per cent of all risks
  • Gory content: 2.2 per cent of all risks
  • Content about drugs: 1.9 per cent of all risks
  • Commercial content: 1.6 per cent of all risks
  • Content about self-harm or suicide or anorexia/bulimia: 1.5 per cent of all risks
  • Violent pornography: 0.9 per cent of all risks
  • Racist content: 0.8 per cent of all risks
  • Hateful content: 0.4 per cent of all risks
  • Content harmful to self-esteem: 0.3 per cent of all risks

Presentation

The bar chart represents the online content-related risks mentioned by the children surveyed in the framework of the EU Kids Online project. Each category of online risk is represented by a horizontal bar, with its length indicating that risk’s percentage of all risks mentioned.

Figure 2 description

Overview

The chart shows the percentage of children in Albania, Brazil, Bulgaria, Chile, Ghana, Italy, Montenegro, the Philippines, South Africa and Uruguay who have reportedly been exposed to the following online risks: self-harm content; suicide content; hate speech; violent content; sexual content; being treated in a hurtful way; meeting someone face to face.

Values

Numerical values presented on the image: Percentage of children who have been exposed to online risks, by country.

Values are percentages of children, listed by country in the order: Albania, Brazil, Bulgaria, Chile, Ghana, Italy, Montenegro, Philippines, South Africa, Uruguay.

  • Self-harm content: 18, 12, 18, 15, 15, 22, n/a, 14, 18, 22
  • Suicide content: 12, 11, 12, 12, 16, 13, n/a, 20, 18, 16
  • Hate speech: 10, n/a, 28, 21, 12, 35, n/a, 12, 34, 35
  • Violent content: 35, n/a, 26, 30, 18, 33, n/a, 22, 33, 40
  • Sexual content: 16, 14, 37, 24, 39, 27, 31, 30, 51, 36
  • Being treated in a hurtful way: 6, 23, 29, 20, 16, 10, 12, 23, 22, 20
  • Meeting someone face to face: 16, 21, 21, 8, 19, 9, 12, 13, n/a, 23

Presentation

The bar chart represents the percentage of children who have been exposed to content-related online risks, as reported by them in the framework of a Global Kids Online survey. For each country featured in the study, each category of online risk is represented using vertical columns of a specific colour, with height indicating the percentage of children who have been exposed to it.
