IGF 2019 - Tackling online hate speech is a multi-stakeholder responsibility
- BIK Team
On Thursday, 28 November 2019, the Insafe network hosted a workshop at the annual Internet Governance Forum (IGF) in Berlin, in cooperation with the Federal Ministry of Justice and Consumer Protection of Germany, on the topic of online hate speech. During the session, around 90 participants and high-level speakers discussed what more can be done to tackle hate speech online.
High-level statements from policy makers, industry and practitioners
After a series of short statements from the high-level panellists, Thomas Blöink, Head of Subdivision at the Federal Ministry of Justice and Consumer Protection, kicked off the discussion by presenting the German Network Enforcement Act – a law put in place by the German government in June 2017 requiring social networks with over 2 million registered users in Germany to remove "obviously illegal" content within 24 hours of notification. Where the (il)legality is not obvious, the provider normally has up to seven days to decide on the case.
Following this policy example, Chan-jo Jun, advocate for IT law, outlined the case of a refugee defended by his law firm. The refugee had taken a selfie with German Chancellor Dr. Angela Merkel and subsequently received a great deal of hate messages, as the picture repeatedly appeared in fake news reports linking him to terrorism. Even though the refugee reported the case to Facebook multiple times, the picture and accompanying comments were not taken down because, according to Facebook, they did not violate its community standards. The victim therefore turned to Jun's law firm, which helped him bring the case to court. Since the content was covered by the German Network Enforcement Act, the picture was eventually taken down.
Austrian journalist and author Ingrid Brodnig contributed a further example: a story claiming that Vienna residents of African descent had committed rape. The story turned out to be fake, but the damage was done. This highlights how many people spend little time checking the accuracy or authenticity of a piece of news, instead simply reacting emotionally.
These two cases pose several basic legal questions: who is responsible for content posted by anonymous sources? Can a person reposting hate speech content (whether text or image) they did not produce themselves be held responsible for defamation?
To conclude the high-level round, Sabine Frank from Google provided further details of how Google tackles hate speech online. 500 hours of content are uploaded to YouTube every minute, and on average, one per cent of this is illegal or violates Community Standards. Google has made great progress with machine learning to help detect illegal content such as child sexual abuse material (CSAM), spam and, more recently, terrorist content. In the second quarter of 2019, 9 million videos were removed, with 78 per cent identified by machine learning before anyone had viewed the content.
Interactive group discussions to foster a multi-stakeholder approach
Following these high-level statements, workshop participants split into four groups to discuss the following questions:
- How can children's rights to participation, access to information, and freedom of speech be preserved and balanced with their right to be protected from violence, exploitation and abuse in the online environment? This table discussion was facilitated by Kathrin and Joao, Better Internet for Kids (BIK) Youth Ambassadors.
- How can children's resilience be increased by means of capacity building, media literacy, support and guidance in the digital environment? This table discussion was facilitated by Sofia Rasgado from the Portuguese Safer Internet Centre (SIC).
- What role should internet platforms play in defining the standards for acceptable content in light of freedom of speech? This table discussion was facilitated by Ricardo Campos from the University of Frankfurt and Lawgorithm (Sao Paulo).
- How can cooperation and collaboration on national, regional and global levels help to counteract hate speech online? This table discussion was facilitated by Carolin Silbernagl from Das NETTZ.
The session concluded with each table facilitator summarising the highlights of their group discussion. The main takeaways include:
- Online platforms need to develop concrete social media community guidelines that are accessible and easy to understand for children and young people.
- For children and young people to become critical thinkers, digital literacy education should be a mandatory part of the school curriculum, including its ethical and human rights dimensions.
- Media literacy and online safety education is key, not only for children and young people but also for adults.
- An equilibrium needs to be found between the interests of governments, internet service providers and the wider public.
- The aim should be to move towards international standards with common guidelines, making it easier to interact with the big social media platforms and to hold them accountable.
- Hence, the goal is not to harmonise an international set of laws, but to build on shared principles, a shared basis, which still allows for national and regional diversity, as no single stakeholder group can solve this on their own.
For further information about the workshop, consult the transcript on the website of the Internet Governance Forum.