Safer Internet Forum 2019 - Using AI as a solution to online violence

On Thursday, 21 November 2019, the Safer Internet Forum (SIF) took place in Brussels, Belgium. With a theme of "From online violence to digital respect", it also celebrated 20 years of safer/better internet funding by the European Commission. Below, read the summary of a deep dive session on using Artificial Intelligence (AI) as a solution, led by Julie Dawson, Director of Regulatory and Policy at Yoti, and Milan Zubicek, Government Affairs and Public Policy Manager at Google.

Date 2019-12-17 Author BIK Team

Hans Martens from European Schoolnet chaired this session, setting the context by stating that, nowadays, it would be difficult to organise an edition of the Safer Internet Forum without a session on artificial intelligence (AI). AI is everywhere right now; we hear about the opportunities and solutions for cyber challenges provided by AI approaches but, at the same time, there are concerns about having AI embedded in the systems and platforms we use. Inclusive design approaches, safety by design, privacy by design, and attention to accessibility issues are all important in delivering better and safer online experiences for children and young people (and indeed all users), and should therefore be a priority in AI-based approaches. Hans commented, however, that this session would have a specific focus on using AI as a solution to some of the challenges encountered online.

Julie Dawson, Director of Regulatory & Policy at Yoti, kicked off the session by giving an overview of the Yoti solution and some of its current applications. As a global identity platform, Yoti provides services in more than 175 countries, accepting thousands of government identity documents as proof of identity; as a result, thousands of businesses now accept Yoti as part of a trusted network. The premise of the service is that once a user creates their Yoti identity, it can be used multiple times in multiple settings and applications. The user's details are encrypted into unreadable data that can only be unlocked by the user; nobody else can access or decipher it, not even Yoti staff. Images are instantly deleted after the age estimation takes place. The anonymised face and year of birth data then help to build the neural network and further improve the accuracy of the technology; however, all subjects have the option to opt out of this use.
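
The data handling described above can be pictured as a simple pipeline. The sketch below is purely illustrative – the function names, the anonymisation step and the record format are assumptions, not Yoti's actual code – and shows an image being used once for an age estimate, then discarded, with only anonymised data optionally retained for training.

```python
# Illustrative sketch only: hypothetical names, not Yoti's implementation.
from dataclasses import dataclass
from typing import Optional


def estimate_age(image_bytes: bytes) -> int:
    """Placeholder for the age-estimation model."""
    return 21


def anonymise_face(image_bytes: bytes) -> list:
    """Placeholder for an anonymised face representation (no raw pixels kept)."""
    return [0.0, 0.0, 0.0]


@dataclass
class AgeCheckResult:
    estimated_age: int
    training_record: Optional[dict]  # None when the user has opted out


def run_age_check(image_bytes: bytes, year_of_birth: int, opted_out: bool) -> AgeCheckResult:
    estimated_age = estimate_age(image_bytes)
    embedding = anonymise_face(image_bytes)
    # The original image is not retained once the estimate has been produced.
    image_bytes = b""
    record = None if opted_out else {"embedding": embedding, "year_of_birth": year_of_birth}
    return AgeCheckResult(estimated_age, record)
```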

Yoti started out as a free-to-download consumer app, but has since developed into a B2B application as well. As such, there are now two main approaches to employing the technology:

Yoti app and trusted network
Provides identity verification, age verification, biometric e-signatures, biometric authentication, and access control. The processing takes place within Yoti's private, secure identity platform. A user can create their own trusted identity once and use it many times.

"Powered by Yoti" technology
In this application of the technology, service providers can embed Yoti into their existing web or mobile services. Services provided include document scanning, facial recognition, liveness detection, age estimation and voice recognition.

Ultimately, Yoti wants to be the identity verification provider of choice, but still maintains a human element alongside the tech environment. Yoti is mindful, however, that there is no silver bullet for identity verification; in brief, whatever is being done now is still not good enough. For this reason, the company seeks to act differently in this space: to both scrutinise and invite scrutiny.

Yoti has an internal ethics and trust committee which oversees the development and implementation of the company's ethical approaches and performs a "guardian" role, acting as extra eyes and ears internally to raise any issues. The group has very clear principles which it adheres to, namely:

  • Always act in the interests of the user.
  • Encourage personal data ownership.
  • Enable privacy and anonymity.
  • Keep sensitive data secure.
  • Keep the community safe.
  • Be transparent and accountable.
  • Make Yoti available to anyone.

Julie Dawson then went on to provide an overview of some of the Yoti solutions, including age scan, which has many applications such as social media, online dating, retail and e-commerce, gambling, gaming and e-sports. In this work, Yoti is a signatory of the "Safe Face Pledge", which aims to:

  • Show value for human life, dignity, and rights.
  • Address harmful bias.
  • Facilitate transparency.
  • Embed commitments into business practices.

The algorithms behind the technology are constantly being monitored for accuracy by age, gender and skin tone, and accuracy is continually improving. A Yoti age scan white paper is available for those who want to know more.

Yoti is being used in some specific environments to protect children and young people. To give an example, a key challenge for Yubo (a live streaming app for teenagers with over 20 million users globally, which aims to help users make new friends) was to make sure only users from the right age group can chat together: 13-17 years and 18+ have separate communities on the app, and under 13s aren't allowed to use it. Enforcing these age groupings is a critical priority for Yubo. This is why Yubo joined forces with Yoti.

As part of its safety measures, Yubo uses Yoti's age estimation and verification solutions to detect risky accounts and create a safer and more trusted environment. The result has been that over 50 million users have been age scanned, and thousands of accounts have been suspended. Equally, hundreds of Yubo users voluntarily verify their identity every day using the Yoti tools. The typical process is as follows (a simplified sketch of the flow appears after the list):

  1. Yubo uses Yoti Age Scan to analyse users' profile pictures and flags suspicious profiles to its moderation team.
  2. Yubo's moderation team reviews flagged accounts and can decide to suspend them. Suspended accounts are required to verify their identity via Yoti in order to continue using Yubo.
  3. Every Yubo user has the option to verify their picture and date of birth via Yoti. Verified users are rewarded with a yellow "verified" tick on their profile.
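
As a rough illustration of the three steps above, the sketch below models the flag-review-verify flow; all names and the tolerance value are hypothetical and do not reflect Yubo's or Yoti's actual systems.

```python
# Hypothetical sketch of the flag -> human review -> verify flow described above.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Profile:
    user_id: str
    declared_age: int
    estimated_age: int        # e.g. from an age-estimation scan of the profile picture
    verified: bool = False    # True once the user has verified their identity


@dataclass
class ModerationQueue:
    flagged: List[Profile] = field(default_factory=list)

    def flag_if_suspicious(self, profile: Profile, tolerance: int = 3) -> None:
        # Step 1: flag profiles whose estimated age disagrees with the declared age.
        if abs(profile.estimated_age - profile.declared_age) > tolerance:
            self.flagged.append(profile)

    def review(self, profile: Profile, moderator_suspends: bool) -> str:
        # Step 2: a human moderator decides; suspended users must verify to continue.
        if moderator_suspends and not profile.verified:
            return "suspended_pending_verification"
        # Step 3: verified users keep access (and get a "verified" badge in the app).
        return "active"
```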

Other applications are being developed, such as age gating content at 15 and 18, examining unintended consequences of verified profiles, and looking at the age of victims and perpetrators in child sexual abuse cases.

In conclusion, Julie Dawson identified some key challenges in this space going forward:

  • The language around the practices of facial detection, recognition, matching and so on, and public awareness of it, is important for building trust; a common vocabulary (one-to-one, one-to-many, surveillance, etc.) is needed. The Future of Privacy Forum (FPF) is conducting some research in this space.
  • Building consented data sets (using data from over 13s only).
  • Consideration of what different skill sets are needed, and establishment of oversight organisations (ethics committees, researchers, consumer rights, human rights, online harms, and so on).
  • Consideration of where AI applications could be useful going forward.
  • Examination of different approaches that border on AI.

Next up was Milan Zubicek, Government Affairs and Public Policy Manager at Google. Milan Zubicek commenced by stating that Google has strong experience with using AI and machine learning – such tools can boost the performance and functionality of Google's products and services. The company is also learning what the challenges and issues linked to this space are, and how Google can tackle them. As such, Google has established a set of AI principles which guide its ethical development of AI solutions. These include:

  • AI should be socially beneficial.
  • AI should not contain bias.
  • Safety and privacy should be integrated by design.
  • There must be accountability in all AI uses.
  • All AI applications should seek to deliver scientific excellence.

Google's AI solutions are only available for uses that accord with the above principles. Conversely, Google's solutions should not be used for:

  • Anything dangerous.
  • Mass surveillance.
  • Anything which is in contravention of national laws.

All of these principles are tested over time, and Google works with various sectors to conduct ongoing research and development. Equally, Google has developed various tools to test new developments. One such example is the Google "What-If Tool", which lets developers test what the result of a machine learning model might be if the input data were different.
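
The underlying idea of such counterfactual testing can be shown with a toy example (this is a generic sketch of the concept, not the What-If Tool's actual interface): change one input feature and see how the prediction moves.

```python
# Toy illustration of "what if the input data were different": perturb one
# feature and compare the model's outputs. Not the What-If Tool's API.
from typing import Callable, Dict


def counterfactual_delta(model: Callable[[Dict[str, float]], float],
                         example: Dict[str, float],
                         feature: str,
                         new_value: float) -> float:
    """Return how much the prediction changes when one feature is altered."""
    edited = dict(example)
    edited[feature] = new_value
    return model(edited) - model(example)


# A trivial scoring function stands in for a trained model.
toy_model = lambda x: 0.6 * x["feature_a"] + 0.4 * x["feature_b"]
print(counterfactual_delta(toy_model, {"feature_a": 1.0, "feature_b": 0.0}, "feature_b", 1.0))
# 0.4 – the prediction rises when feature_b flips from 0 to 1.
```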

Google try to be transparent across all areas of this work, sharing principles, research data, data sets and tools with the wider community, and making APIs (application programming interfaces – sets of functions and procedures allowing the creation of applications that access the features or data of an operating system, application, or other service) available, such as those for translation and image analysis. This means that others can utilise and benefit from the advances which Google has made.
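
As an example of what such public APIs enable, the snippet below calls the Cloud Translation API via its Python client. It is only a minimal sketch, assuming the google-cloud-translate package is installed and application credentials are already configured.

```python
# Minimal sketch: translating a string with the Cloud Translation API (v2 client).
# Assumes `pip install google-cloud-translate` and configured credentials.
from google.cloud import translate_v2 as translate

client = translate.Client()
result = client.translate("La transparence renforce la confiance.", target_language="en")
print(result["translatedText"])  # e.g. "Transparency builds trust."
```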

Milan Zubicek then provided a few case study examples. Every minute, 500 hours of video content are being uploaded to YouTube. In this case, the scale presents the challenge; content cannot be moderated by human moderators alone, and machines are needed to detect and flag problematic content. Removal still requires human intervention – there is not yet the nuance in machine learning to determine, for example, political speech versus hate speech. When human moderators respond to flags, that data is fed back into the system to further improve the machine learning.
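
The division of labour described here – machines flag at scale, humans decide, and their decisions feed back into training – can be sketched roughly as follows; the classifier, threshold and data handling are placeholders rather than YouTube's actual systems.

```python
# Generic sketch of a machine-flag / human-review / feedback loop.
from typing import Callable, List, Tuple


def moderate(items: List[str],
             risk_score: Callable[[str], float],
             human_decides_removal: Callable[[str], bool],
             threshold: float = 0.8) -> List[Tuple[str, float, bool]]:
    """Flag items above a risk threshold, let humans decide, keep labels for retraining."""
    labelled = []
    for item in items:
        score = risk_score(item)
        if score >= threshold:                      # the machine flags at scale
            remove = human_decides_removal(item)    # removal still needs a human
            labelled.append((item, score, remove))  # fed back to improve the model
    return labelled
```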

Another use of machine learning is to tackle hate speech in the comment sections of publisher websites, such as the New York Times. Machine learning can measure the level of toxicity of comments; publishers can then make a choice of how much toxicity they allow through to their public platforms, and prevent the most harmful comments from being published. The tools can also provide direct feedback, telling the user that their comments will not be published in their current form, and giving them the option to rephrase.
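
A publisher-side version of this might look like the toy function below, where the toxicity score would come from a scoring service and the threshold is whatever the publisher chooses; both the names and the values are illustrative.

```python
# Toy sketch of publisher-configurable toxicity gating with direct feedback.
def handle_comment(comment: str, toxicity_score: float, publisher_threshold: float = 0.7) -> str:
    """Publish the comment or ask the user to rephrase, based on a toxicity score in [0, 1]."""
    if toxicity_score >= publisher_threshold:
        return "Your comment may not be published as written – please consider rephrasing."
    return "published"
```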

The floor was then opened to questions. One participant commented that, under new legislation, there will be more responsibility for platforms such as YouTube to show age levels for sensitivity in content. What are Google doing in this respect? Milan Zubicek responded that Google's policy teams are working with local experts, NGOs, policy makers and similar on local implementation in all Member States.

Another participant asked how age verification is being used to make sure platform users are the appropriate age. Milan Zubicek responded that Google provide tools (controls) to users, and try to educate parents. Google Family Link, for example, gives parents control over how their children can spend time on YouTube. Google are also developing specific tools and platforms of relevance for younger users (such as YouTube Kids). They are also exploring providing more relevant content for different age groupings of young people.

When asked how machine learning can limit the biases that are very close to human nature, Milan Zubicek gave an example of Google Images, where a search on "famous scientists" would typically present images of middle-aged white males with grey hair. Although the images presented are factually correct, this possibly strengthens the bias prevalent in human nature. He then gave another example of coffee mugs – the direction in which the handle is facing in images, for example, can reinforce the bias that most people are right handed. The algorithms can obviously be tweaked, but what is the right ratio? These sorts of issues can't be solved by Google alone.

On a question regarding transparency, Julie Dawson commented that companies need to say how good or how bad things are. When Yoti first published their white paper back in January 2019, they were flooded with comments along the lines of "the results aren't especially good, but at least you're putting it out there". A key challenge in terms of external benchmarking is that there are no comparable benchmarking organisations in Europe. Julie Dawson went on to comment that, in some respects, Yoti benefits from being a smaller company that has started from the ground up – for example, this has allowed it to set up an ethics committee "with teeth". Some things are easier in a small company; but many of the larger organisations have been very helpful, while also bringing invaluable scrutiny to the work which they are doing. Milan Zubicek concluded by echoing Julie Dawson's comments. Understanding the concerns, debunking misunderstandings, transparency, explainability, and so on is key. Google are working to develop as a company, but in collaboration with wider industry also. Google seeks to set high standards for others to follow, but equally they work with – and learn from – smaller companies in this space.

For more information about the Safer Internet Forum 2019 "From online violence to digital respect", you can read the full report on betterinternetforkids.eu and visit betterinternetforkids.eu/sif2019.
