Identifying the concern around AI
It has become increasingly difficult to distinguish AI-generated content from authentic content online. AI technologies have progressed rapidly in the quality and range of material they can generate, especially images and video.
As social media have become the primary source of news and information for many people, the virality of content on these platforms, combined with the fact that its sources are often untraceable or untrustworthy, has made it easier for misinformation and disinformation to spread quickly to a broad audience.
Indeed, misinformation and disinformation have become a central concern of media literacy initiatives and are considered one of the greatest threats to modern democracies, especially in the run-up to elections, as is the case for the upcoming European elections in June 2024.
This calls for a shift in priorities: new tools and strategies are urgently needed to detect, remove and debunk disinformation. These include, for example, fact-checking tools, social media bots, and artificial intelligence (AI) systems.
How do Europeans feel about AI-generated content?
According to an EBU survey based on GWI Zeitgeist May 2023 (covering four European countries: France, Germany, the UK and Italy), respondents struggle to identify AI-generated content online. 58 per cent of internet users aged 16-64 think they interact with AI-generated content, and one in five believe they do so at least daily. However, confidence in their ability to identify such content is low: only 21 per cent are very or extremely confident.
Overall, more than a quarter of internet users aged 16-64 are not sure whether they interact with AI-generated content. This shows that one of the challenges posed by AI is ensuring that people can differentiate between human-created and AI-generated content.
In addition, more than half of the surveyed internet users aged 16-64 are worried that AI can easily be used for unethical purposes. However, only 35 per cent are concerned about how AI tools are being developed (versus 31 per cent who are not). At the same time, 42 per cent believe AI tools will help improve the workplace, and 43 per cent find customer chatbots helpful.
While AI undeniably offers many new opportunities and tools, it is essential that policymakers actively work to foster general education on AI and media literacy, and support funding for research and innovation. A cooperative approach is required among different institutions, including AI developers, social media platforms, government bodies, and media publishers and editors.
Where does the EU stand on AI literacy?
The EU has been seeking to ensure that Europeans can trust what AI has to offer. While acknowledging AI's potential to help solve many societal challenges, it is also aware that certain AI systems create risks that need to be addressed to avoid undesirable outcomes.
- The AI Act, the first comprehensive legal framework on AI worldwide, aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, it seeks to reduce administrative and financial burdens for businesses, in particular small and medium-sized enterprises (SMEs). These new rules seek to foster trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety and ethical principles. The European Commission presented its proposal for the AI Act in 2021, and the Regulation was adopted on 13 March 2024. While some provisions will apply shortly after adoption, others will only become applicable at the end of a transitional period. For this reason, the Commission is launching the AI Pact, which seeks voluntary commitments from industry to start implementing the AI Act's requirements ahead of the legal deadlines.
- AI is also addressed in the European Declaration on Digital Rights and Principles, which presents the EU's commitment to a secure, safe and sustainable digital transformation. The EU wants to ensure that new technologies such as artificial intelligence, data analytics and the Internet of Things respect everyone's rights, that users know when they are interacting with AI and the algorithms some websites use to show personalised content, and that these systems do not make choices on the users' behalf.
- Lastly, the Commission is tackling the spread of online disinformation and misinformation to protect European values and democratic systems more generally through the EU strategy on disinformation. It includes a series of action plans, such as the Action Plan against Disinformation, the European Democracy Action Plan and the 2022 Code of Practice on Disinformation.
Other projects and initiatives
In recent years several relevant initiatives have emerged, such as:
- DigComp 2.2 (the Digital Competence Framework for Citizens), developed by the European Commission's Joint Research Centre (JRC). DigComp provides a common understanding of what digital competence is, with more than 250 new examples of the knowledge, skills and attitudes that help citizens engage confidently, critically and safely with digital technologies, including new and emerging ones such as systems driven by artificial intelligence (AI).
- The EDMO (European Digital Media Observatory) network, together with its 14 EDMO Hubs, is engaged in exploring the risks of artificial intelligence with respect to the impact and scope of online disinformation, as well as the opportunities it opens for the development of new AI-powered technologies facilitating its detection and understanding.
- A selection of EU-funded projects on AI in dialogue with EDMO can be accessed here. The projects include research on AI methods for countering online disinformation; more specifically, ongoing research focuses on detecting AI-generated content and developing AI-powered tools and technologies that support verification professionals and citizens with content analysis and verification.
- On the industry side, one example is Microsoft's AI for Cultural Heritage initiative, which focuses on using artificial intelligence and technology to preserve and promote cultural heritage. The initiative can enhance media literacy by providing access to historical information and fostering digital storytelling. Microsoft states it will support specific individuals and organisations through collaboration, partnership and investment in AI technology and resources.
Don’t miss out on the publication of the MediaSmartOnline campaign materials: check the campaign page regularly and follow the #MediaSmartOnline hashtag to see the campaign roll out on social media.