Yellow: Working with teens responsibly

As part of the wider Better Internet for Kids project, we regularly liaise with the providers of online services, offering feedback on new apps and services and real-life insight into some of the challenges which children and young people face online when using such products, as reported through our network of helplines and similar channels. Equally, we welcome information on the steps industry is taking to help keep users safe online. We recently heard from Yellow about their approach.

Date: 2017-10-20 | Author: Yellow | Section: awareness, industry

"We built Yellow to be a fun and engaging place for people to make new friends. We know that teenagers particularly love to use video and live streaming, and we are striving to offer that, responsibly and as safely as possible. We'd prefer that teens do this on Yellow than moving to platforms that have less safeguards.

We know that as teenagers grow up, they experiment and like to push boundaries, and this is a particular challenge for all social media platforms. On occasion, as part of this experimentation, some teens do behave inappropriately, often imitating the wider culture of TV, music and celebrity behaviour, such as by posing in their underwear. Yellow is very clear that it has a responsibility to protect young people, and we constantly develop our tools and processes so that young people have boundaries and aren't placing themselves at potential risk.

To do this we are experimenting with a number of things, including directly intervening when young users break the community guidelines. For example, we send users who are posing in their underwear a real-time notification asking them to put their clothes back on, and for the most part they do. Those who don't have their profiles suspended or deleted. We also send users alerts and information about the importance of protecting their safety and privacy as they enter live streams.
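The blog post doesn't describe how this warn-then-suspend flow is implemented. As a purely illustrative sketch, the decision logic might look something like the following (in Python; the `classifier` and `notifier` interfaces and the single-warning threshold are assumptions, not Yellow's actual system):

```python
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    warnings: int = 0
    suspended: bool = False

class RealTimeModerator:
    """Hypothetical sketch of a warn-then-suspend moderation flow."""

    MAX_WARNINGS = 1  # assumption: one real-time warning before suspension

    def __init__(self, classifier, notifier):
        self.classifier = classifier  # e.g. an image-moderation model
        self.notifier = notifier      # e.g. a push-notification client

    def handle_content(self, profile: Profile, image: bytes) -> None:
        # Assumed classifier interface: returns True when the image
        # breaks the community guidelines (e.g. posing in underwear).
        if not self.classifier.violates_guidelines(image):
            return
        if profile.warnings < self.MAX_WARNINGS:
            profile.warnings += 1
            self.notifier.send(profile.user_id,
                               "Please follow the community guidelines.")
        else:
            profile.suspended = True
            self.notifier.send(profile.user_id,
                               "Your profile has been suspended.")
```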

Creating and running an app that is used by a great number of teenagers means we have a greater responsibility than other social networks to ensure we create an environment with these boundaries. It is a challenge, not least because the way teens communicate is constantly changing; for example, they use emojis in place of words, such as a peach for someone's bottom. But it is a challenge we are serious about meeting.

Technology for moderating user content is developing fast but, like all such measures, it is not fool-proof. We are committed to exploring the difference Artificial Intelligence can make and are working with others in the industry on this.

We believe that educating users and their carers is equally important. We trust that the Parent and Teen Guides we've created will be used widely by anyone concerned about teens and their use of Yellow. For the teen guide, we decided the best way to understand teens was to ask them: we conducted a poll with 250 of our users, and you can see the top tips for staying safe online that they shared in the guide.

As part of this, we wanted to remind everyone of our policies and share some of the specific measures we've taken to protect our teen users.

  • We have a zero-tolerance policy on nudes and pictures in underwear, and this is clearly stated in our community guidelines. We have technology and human moderation in place, but we also encourage our users to report anything inappropriate.
  • We use a combination of technology and humans to check, in various ways, whether people are who they say they are, including by monitoring their behaviour over time on the app. Anyone found to be lying about their age is blocked instantly. Anyone under 13 is not permitted to use Yellow: if someone under 13 enters their correct birth date, they are told they are unable to register.
  • We monitor all live-stream titles to ensure they follow our guidelines and don't contain anything explicit or inappropriate for this age group. This is constantly being improved to incorporate new combinations of emojis that teens use. If a title is inappropriate, the user is messaged by a moderator asking them to change it and given one minute before the live stream is automatically closed (a sketch of this kind of title filtering follows this list). Again, this is a new kind of intervention, acting directly on a user's real-time communication.
  • We have added a pop-up for every user entering a live stream to remind them of the guidelines.
  • We have banned the use of emojis in profile names that are used to imply something sexually explicit, for example the aubergine, which is widely used by teens in this way.
  • We have a dedicated team to review reports and communications from parents, teachers and law enforcement and respond directly within 24 hours.
  • We actively encourage everyone (users, parents, guardians or commentators) to report anything at all that they think is suspicious, whether inappropriate content or behaviour, or any concerns they have about other users. We've embedded a simple and effective abuse-reporting feature in every profile, and we respond to every report. We've made it very easy to unfriend other users and to report anything inappropriate via our safety centre: http://safety.yellw.co/.
  • To support the education of young people and to encourage responsible behaviour and conduct, we have developed guides for both parents/educators and teens, intended to inform and help all those involved in the care and protection of young people.
  • We understand that there are safety concerns with any social media service used by young people, and we've invested in employing a child online safety consultant, Annie Mullins OBE. She has expertise in supporting social media companies to improve safety for users on their services, is a member of the UK Council for Child Internet Safety (UKCCIS), and has worked extensively on industry self-regulation in Europe to develop standards for social media services, including the UK good practice standards for the digital industry on child safety online. See thetrustandsafetygroup.com.
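The post doesn't say how the title and profile-name filtering mentioned above works under the hood. A minimal, purely illustrative sketch, assuming a simple deny-list of emojis and keywords (only the aubergine ban is stated in the post; the peach and the keyword pattern are assumptions) and a one-minute grace timer, might look like this in Python:

```python
import re
import threading

# Hypothetical deny-list. The aubergine ban is stated in the post; the
# peach and the keyword pattern are assumptions for illustration.
BANNED_EMOJIS = {"\U0001F346", "\U0001F351"}          # aubergine, peach
BANNED_WORDS = re.compile(r"\b(nude|nudes|naked)\b", re.IGNORECASE)

def is_inappropriate(text: str) -> bool:
    """Check a live-stream title or profile name against the deny-list."""
    if BANNED_WORDS.search(text):
        return True
    return any(ch in BANNED_EMOJIS for ch in text)

def moderate_stream_title(stream, send_warning, close_stream, grace_seconds=60):
    """Warn the host about an inappropriate title, then close the stream
    after the grace period unless the title has been changed."""
    if not is_inappropriate(stream.title):
        return
    send_warning(stream.host_id, "Please change your stream title.")

    def check_and_close():
        if is_inappropriate(stream.title):   # still inappropriate after one minute
            close_stream(stream)

    threading.Timer(grace_seconds, check_and_close).start()
```

The same `is_inappropriate` check could be applied to profile names at signup or edit time; a real system would also need to handle misspellings, spacing tricks and new emoji combinations, which is where the machine-learning work mentioned below comes in.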

As mentioned, we are always looking at ways to improve safety on the app, and we are also committed to the following:

  • Expanding our algorithms to monitor usernames and titles more thoroughly for the use of emojis. We have employed an Artificial Intelligence engineer, and one of their first tasks is to build a tool that will enable this to happen.
  • Incorporating identity verification for profiles that have been reported multiple times as fake. A suspicious user will be asked to prove their identity by sharing government-issued ID, such as a passport, or by participating in a live chat with us to prove they are the person in the profile picture.
  • Investigating how to detect users depicting drugs in their profiles and messages. Reported users are banned immediately, and we have hired an engineer to build technology that enables further proactive monitoring.
  • Improving age verification. As for every social network, age verification is a challenge, and we rely in part on users being truthful about their age. We are investigating technologies and working with others in the industry on this issue. We already have an algorithm in place that matches the age listed in a profile against the profile picture, and if they don't match we ban the user (a sketch of this matching logic follows this list), but there is more to be done and we acknowledge this.
  • Reconsidering the swipe function. We originally built Yellow with the ability to swipe through profiles because swiping is how most people use smartphones and other mobile devices, from looking at photos to deleting messages to unlocking an iPhone. We didn't consider that it might be compared to Tinder, and we are very uncomfortable with the comparison, to the point where we are trialling the removal of the swipe function before the end of this year. Yellow is designed to be a place where teenagers can make new friends online across the world, a modern-day pen pal relationship if you like, and it is absolutely not a dating site.
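The post gives no detail on how the age-matching algorithm works. A minimal sketch of the decision logic, assuming some vision model that estimates an age from a photo (the model, the tolerance and the returned labels are all illustrative assumptions, not Yellow's implementation), could look like this:

```python
def check_profile_age(profile, estimate_age_from_photo, tolerance_years=5):
    """Compare the age listed in a profile with an age estimate derived
    from the profile picture, and flag the account on a clear mismatch.

    `estimate_age_from_photo` stands in for whatever vision model is used;
    the tolerance and the returned labels are assumptions for illustration."""
    estimated = estimate_age_from_photo(profile.photo)
    if estimated is None:
        return "unverified"   # no face detected; fall back to user reports
    if abs(estimated - profile.listed_age) > tolerance_years:
        return "ban"          # listed age clearly contradicts the photo
    return "ok"
```

A tolerance is needed because age estimation from photos is imprecise, especially for teenagers; a system that banned on any mismatch would produce far too many false positives.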

We totally understand that parents are, rightly, concerned about the safety of the apps used by their teenage children. That's why we're continually looking for innovative ways to improve safety, and we're focusing in particular on developing technology that analyses context and behaviour to alert us to anything suspicious or inappropriate."

This article is republished, with permission, from a blog article by the safety team at Yellow. Please also refer to the original article, which includes various screenshots from the app illustrating the points raised.

The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of the Better Internet for Kids Portal, European Schoolnet, the European Commission or any related organisations or parties.

Please see the Better Internet for Kids (BIK) Guide to online services for further information on some of the most popular apps, social networking sites and other platforms which are commonly being used by children and young people (and adults) today.
