The ability to create fake and edited images of people took a huge leap forward with the introduction of deepfakes around 2017. More recent advancements in AI have made it even easier for anyone to create fake images, video and audio of people, even with little to no technical skill.

This deep dive will explore what deepfakes are, the ways in which people create and use them online, and the risks they may pose to children and young people. The importance of media literacy will also be outlined, along with advice on how to equip your learners with strategies to deal with deepfakes.


What are deepfakes?

Deepfakes are typically images or videos created by combining or superimposing existing images/videos to produce footage of something that never occurred. Historically, the technology has often been used to face-swap people in photos and videos – mainly celebrities (see the next section for an ingenious but slightly disturbing example!).

However, deepfake content can also include audio that sounds like the voice of a famous person saying things that they have never actually said.

How is this content created? Through the use of powerful AI tools that manipulate existing content or create brand-new images, videos and audio that are completely fake but appear genuine to most people.

Early deepfake technology required technical skill and the training of AI on many images, videos or audio clips of a person in order to replicate their likeness. In the case of celebrities and politicians, there is plenty of existing content online to train an AI deepfake tool, but much less exists for an everyday citizen. With advancements in AI technology, deepfake content can now be created from very little source material, meaning that anyone could be targeted and fall victim to having their image or likeness misused without their awareness or consent.

How are deepfakes used online?

There are a number of reasons why people create and share deepfakes online, with positive and malicious motives underpinning their use.

A classic example of early deepfake use (for harmless comedy) was the trend of swapping every actor’s face with that of Nicolas Cage in clips of popular films and TV shows.

WARNING – you may find the following video a bit creepy!


Over time, more sophisticated face swaps emerged – this Tom Cruise deepfake used AI to swap Tom Cruise’s face onto an actor who was able to accurately mimic his mannerisms and voice. The popularity of using deepfakes for entertainment even spawned a 2023 TV show in the UK in which every character is a deepfake version of a popular celebrity or sportsperson.

Sometimes, people may create deepfakes simply to show off what the current technology can do, or they may create content for the purpose of attracting attention on social media (views, likes, shares, followers, etc.). The technology has also been used to raise awareness and to communicate, as in this example of footballer David Beckham speaking in nine languages.

However, early adopters of deepfake technology quickly recognised that it could be put to malicious use, from deepfaking people into pornographic content (creating images and videos of acts that never took place) to creating deepfakes of politicians and world leaders saying or doing things they would not be expected to say, things that might be untrue, or even things that could pose a risk of harm to people.

You can learn more about how deepfakes are created and used by checking out the Deepfake Lab, part of The Glass Room online exhibition.


Using The Glass Room exhibition and your own research online, find and select 3-5 deepfake examples that would be suitable to show to your learners. You may wish to look for examples that vary in quality, from obvious to realistic, to help your learners identify the clues of a deepfake.

What are the possible risks around deepfakes?

  • Bullying – deepfakes showing someone doing or saying something they never did could be used to spread rumours or implicate people in events or actions (some of them possibly criminal) that they never took part in. This could be part of a wider campaign to harass, intimidate or upset someone, or damage their reputation.
  • Extortion/exploitation – deepfakes may be created for sextortion or exploitative purposes. For example, deepfake pornography of a woman could be used to blackmail her or as part of a campaign of gender-based violence.
  • Child Sexual Abuse Material (CSAM) – the rise of generative AI has made it easier for child sexual abusers to create deepfake sexual content involving children. Although fake, this content is illegal (just like real CSAM) and can cause lasting harm to children who are targeted.
  • Scams – deepfakes of famous people have been used without their knowledge to promote products in advertising, or to scam people into signing up to fake schemes or giving away sensitive personal data. Audio deepfakes have also been used against businesses, with staff making payments to a scammer because an audio message that appeared to come from their boss told them to!
  • Intellectual property infringement – there is an argument that the creation of deepfakes of celebrities infringes on their intellectual property rights. Many celebrities trademark their name and likeness to prevent them being used in images and videos without their consent, so anyone creating deepfake material of a celebrity could run the risk of infringing those rights.
  • Disinformation – as some of the earlier examples in this deep dive showed, the potential for disinformation is huge if deepfake content of celebrities, influencers and politicians is created in order to trick people into believing false information or extreme ideologies, or into taking action that may pose a risk of harm to themselves or others.
  • Threats to society and democracy – deepfake content of politicians and world leaders could threaten the democratic process, not only through fake instructions on how to vote or act, but also because the awareness that deepfakes exist can lead people to distrust any official communications. This could mean ignoring important advice or guidance from governments and public bodies around health, safety and democratic processes. This normalisation of deepfake content (where genuine content can easily be dismissed as deepfake if it implicates guilt) is known as the ‘liar’s dividend’.

How can you spot a deepfake?

  • Glitches – Most deepfake technology is still not perfect – it can produce visual glitches, errors and inconsistencies that may give away that an image or video has been altered. This Deepfake Spotter Guide from The Glass Room provides some useful clues to look out for.
  • Audio mismatch – in videos there may be a mismatch between the movement of someone’s lips and the speech, or differences in tone or sound when fake audio is inserted in amongst real audio.
  • Out of character or impossible – images and videos that depict famous people in places/situations they could never have participated in are obvious clues, but behaviour or words that seem very out of character for a person could also be signs of a deepfake.
  • Detector tools – there are tools in development and available to the public that can be used to scan video content for evidence of deepfaking. As this is an area of constant evolution and development, there is no guarantee these tools are (or remain) accurate, but they are useful in checking video content for clues.

AI-generated photo of Donald Trump created by Eliot Higgins using Midjourney v5.

  • Factcheck – deepfakes of world leaders or famous people (such as the example above) seen on social media can be factchecked by checking trustworthy news websites – are they covering the ‘story’? Other research online can also help uncover whether controversial images of high-profile people are real or deepfake.
  • Other clues – AI is improving all the time, but there are still some elements it has yet to master – some of the hands in the image above are out of proportion and lack the right number of fingers! Looking for clues like this can help spot a deepfake, but the current pace of AI development means that these clues/mistakes may not exist in deepfake content of the near future!

What can I do to support my learners?

As the previous section demonstrated, there are strategies that can be used to help attempt to spot a deepfake. However, your role as an educator in supporting your learners is also crucial. Things you can do include:

  • Regular discussion – deepfakes and AI generated images of famous people are now a daily occurrence on social media. Your learners may hear and see these regularly. Taking time to discuss these examples when they appear (and even possibly look at them together) gives you an opportunity to help them figure out how and why these examples are fake rather than genuine.
  • Critical thinking – media literacy skills play a huge role in judging deepfake content. While the skills needed for spotting a deepfake can vary (and will change over time as AI technology continues to improve), the foundations of media literacy will always hold value. Encourage your learners to be curious and ask questions, not to take things at face value, and to develop research skills to check the validity of what they see online.
  • Question motives – alongside the previous point, encourage learners always to consider the possible reasons why a deepfake image/video/audio exists, and what its creator wants them to do as a result of seeing or hearing it. This can lead to spotting a scam or deception more quickly, and allows young people to consider what they will do next after encountering deepfake content.
  • Explore reporting tools – large social media platforms are starting to take more steps to combat disinformation, including deepfake content. But your students can also help these platforms by reporting any content they believe to be fake or deceptive. Exploring where to find these reporting tools (and how to use them) also equips your learners with a strategy for dealing with any content online that may be offensive, upsetting or harmful.
  • Understand privacy tools – while deepfake content largely focuses on celebrities, the potential for it to be used to harm your learners is always present. Helping them to use privacy settings on social media and other platforms to protect their personal information, particularly their photos and videos, can help reduce the likelihood of their content being misused. Privacy settings can also help them manage contact with other users.

Further information and resources

Want to learn more about deepfakes? These resources may be useful:

  • Better Internet for Kids resources – Educational resources from across the Insafe network of Safer Internet Centres. You can search for ‘deep fake’, ‘deepfake’ or ‘artificial intelligence’ to find resources in your language and for different age groups.
  • CO:RE Evidence Base – A database of publications and research on youth online experiences. Searching the database for ‘deep fake’ allows you to browse and read relevant research related to this issue.
  • BBC Bitesize – Try the monthly quiz of ‘Artificial or Real’ to test your (and your students’) abilities in spotting deepfakes and AI generated images.
  • Be MediaWise – This lesson on deepfakes and disinformation can help students develop their skills to identify deepfake content.
  • Common Sense Education – this lesson plan explores how deepfake content may present a threat to democracy and society.