Children and young people live in a digital society that presents many opportunities and challenges in their everyday lives. Social media, communication apps and many websites offer the ability to interact with others, to consume information, and also to contribute information and data of their own. Almost everyone can produce information and “create content”, but how do we judge what (and who) is reliable?
This deep dive will provide an overview of information disorders (such as misinformation), the potential impacts they can have on the safety and wellbeing of youth, and what you can do as an educator to support your students to develop their media literacy skills.
What types of information disorder are there?
Information disorder can take three main forms:
| Type | Definition |
| --- | --- |
| Misinformation | False or inaccurate information that is shared without an intention to harm or deceive. |
| Disinformation | False information that is knowingly shared to cause harm. |
| Malinformation | Information based on reality that is shared to cause harm, often by moving information designed to stay private into the public sphere. |
You have likely already heard of misinformation and disinformation, but malinformation is less well-known. This form is often used to cause emotional or reputational damage, to shift the balance of power in a relationship, or to extort a target. Leaking someone’s private messages or photos in order to embarrass or blackmail them, for example, would be malinformation. Many forms of malinformation are shared purely for personal gain.
What types of mis- and disinformation exist?
The above graphic identifies seven main types of mis- and disinformation:
- Satire or parody – there is potential to fool, but no intention to harm.
- False connection – headlines, visuals or captions that do not match the content they appear alongside.
- Misleading content – misleading use of information to frame an issue, news story or individual.
- False context – sharing genuine content in a false context, such as using accurate facts to support a false conclusion.
- Imposter content – impersonating genuine sources or figures with the aim to deceive.
- Manipulated content – altering genuine information or content (such as images or videos) in order to deceive.
- Fabricated content – content that is entirely false, created to deceive and cause harm.
You will notice from the graphic that the types are ordered from low to high in terms of potential harm.
Activity:
There are other forms of misinformation and disinformation that might fall within/across the seven types, or sit in their own category entirely.
Consider these additional forms and decide where you would place them on the scale shown in the previous graphic, based on the potential harm they might cause to children/young people:
- Fake news (news stories that may contain few or no accurate facts)
- Deep fakes (edited videos in which one person’s face is replaced with another’s)
- AI-generated photos (images created by artificial intelligence from a given description)
- Conspiracy theories (theories, usually centred on famous people or historical events, suggesting they are different from how they appear)
- Clickbait (adverts or thumbnail images that encourage users to click through to a news story or article, only to find it is very different from what the image suggested)
- Memes (images accompanied by a humorous caption, which often differs from the original meaning of the image)
You may be able to think of other online content related to misinformation or disinformation that sits on this scale.
How is misleading information spread?
Both human behaviour and technology can play a part in the spread of false or misleading information:
Human behaviour
Human behaviour is a crucial factor in determining if and how misleading or false information may spread online. There are several behaviour-related factors that may facilitate the spread of information disorder:
- Trolls - Some online users will deliberately spread false information (or challenge the authenticity of accurate information) in order to create conflict rather than encourage discussion. They do this by posting inflammatory or digressive content with the intention of provoking other users into an emotional response.
- Echo chambers - If online users only surround themselves with people who share the same beliefs, this may create an ‘echo chamber’ – where those of a like mind reinforce a single viewpoint to the exclusion of alternatives. This can create a false impression that an opinion is more widely held in society than it actually is and can significantly strengthen existing beliefs. Learn more about echo chambers with this video:
- Circular reporting – A phenomenon where a piece of information appears to originate from a number of different sources but actually comes from a single source. It can happen when, for example, publication X publishes misleading information, which is then reprinted by publication Y; publication X then credits Y as the source of the information. When many publications all report on the same piece of misleading information, this is also considered circular reporting. As a result, the information appears to be verified by many independent authors.
- Fraudsters and scammers - Cybercriminals frequently use social media to promote fake advertisements or articles. These ads and articles often have a convincing appearance and try to imitate real ones. The majority of these scams are motivated by money. There are two main ways scammers try to get your money: persuading you to invest in or buy unnecessary or fake products/services, and stealing personal data that allows them to benefit financially.
Technology
While human behaviour often plays a vital role in the dissemination of misleading and false information online, there are a number of features unique to online services and technology that also assist the spread of information disorder:
- Algorithms and filter bubbles - Popular social media platforms and video-sharing sites track and collect data on what you watch and do on their services, and on who you are. Algorithms use this data to present you with content you might be interested in. In other words, the content you see on your social media platforms becomes highly personalised because of these algorithms. The more you encounter misleading information about a certain topic, the more you will be exposed to similar posts. This can create a ‘filter bubble’, where you end up seeing a narrow range of information or opinions based on your previous interests and views (a simplified sketch of this feedback loop follows this list). Want to understand the origins of filter bubbles? Check out this TED Talk by Eli Pariser.
- Search engine optimisation - A technique for boosting the quality and quantity of search engine traffic to a website or web page. Groups wishing to spread false information online or promote their own beliefs have become increasingly skilled at manipulating search engine results so that their website or social media profile appears higher up the list. By achieving this, they gain more views and a greater reach. They may do this by ensuring that their websites or content contain popular keywords that help them appear in more search results (and rank higher in the list of results). They can also post or upload content frequently so that a search engine will recognise it as recent or relevant.
- Fake advertisements - Fake ads are often used to lead you to websites containing misleading information. Advertisements featured on high-profile sites like Facebook, Instagram or Google don’t go through the same rigorous vetting procedures as advertisements that appear on TV, radio or in print media such as newspapers and magazines. This article explains more. These fake ads may not only trick you into buying something questionable or dangerous, but you could also become a victim of a financial scam.
- Persuasive design - Features such as ‘like’, ‘retweet’ and ‘favourite’ are all used to encourage users to interact with the online content they see. By using these features, users may increase their own likelihood of seeing similar content in the future. Content with lots of social interactions (such as likes and comments) is also more likely to continue to spread as some users may believe a popular post to be a trustworthy post, and share it further.
- Bot networks - Bots are small programs used to perform specific online actions that mimic human behaviour (e.g. sending messages, liking posts, retweeting, following other accounts, etc.). These can be used to continuously post false information as well as use false information to respond to other (real) users, increasing the chances of someone encountering it online. Using bots to create fake accounts and follow other accounts can be a quick way to provide a large following to an online account, which can convince some users that it is credible when it is not. Want to learn more? Check out this article.
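To make the feedback loop behind filter bubbles more concrete, here is a minimal, hypothetical sketch in Python. It is not how any real platform works; the topic names, scores and probabilities are invented purely for illustration. The only idea it demonstrates is the one described above: content a user engages with is weighted more heavily in future recommendations, so the range of topics they see gradually narrows.

```python
# A deliberately simplified sketch (not any platform's real algorithm) of how
# engagement-weighted recommendation can narrow what a user sees over time.
import random
from collections import Counter

TOPICS = ["sport", "politics", "health", "science", "celebrity"]

def recommend(interest_scores, n=5):
    """Pick n items, weighting each topic by the user's accumulated engagement."""
    # Every topic keeps a small base chance, but engaged-with topics dominate.
    weights = [1 + interest_scores[t] for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

def simulate(days=30):
    interest_scores = Counter()   # engagement the platform has recorded per topic
    favourite = "politics"        # this user clicks and likes one topic most often
    for _ in range(days):
        feed = recommend(interest_scores)
        for item in feed:
            # The user engages mostly with their favourite topic; the platform records it.
            if item == favourite or random.random() < 0.1:
                interest_scores[item] += 1
    return interest_scores

if __name__ == "__main__":
    scores = simulate()
    # After a few weeks, 'politics' dominates the recorded interests, so the feed
    # keeps recommending more of the same: a toy version of a filter bubble.
    print(scores)
```

The point of the toy model is only that engagement feeds back into the weighting: the more a topic is interacted with, the more often it is recommended, and the narrower the feed becomes, regardless of whether the content is accurate.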
Why do people share misleading information online?
There are a number of reasons why online users may share misleading content or contribute to its spread:
- Validation of beliefs - If a user encounters online content that validates their own beliefs and ideology, they are more likely to share it with others online. They are also more likely to take the information at face-value rather than critically assess it, as it confirms their existing views.
- Furthers a motive - Some online users may choose to share false information if it serves some form of personal gain that negatively impacts others, e.g. it enables them to run an online scam (to extort money or data), it provides an opportunity for hate speech, or it furthers a political agenda.
- Lack of challenge - If false or misleading information goes unchallenged, or if the challenge against it is ineffective, this may be interpreted by other users as an indication that the content is accurate and trustworthy. This may encourage them to also share it with others, thus increasing the reach of the message.
This study by Buchanan (2020) found that users most likely to share misleading information online did so because they believed it to be true, or because it aligned with their existing beliefs.
What are the motives for producing misleading or false content?
Spreading misleading or false information is known to be disruptive, potentially destructive and harmful to individuals, groups and society. So, what drives people to produce this kind of information?
- To make money - If you see misleading news items with shocking titles on social media, you are more likely to click on them. These clicks generate a lot of advertising revenue for websites. But fake news is also used to sell products, for example: a miracle cure for an infectious disease or a product that a celebrity is supposedly very enthusiastic about.
- To acquire data - Misleading content may encourage users to interact with it in ways that capture data about them and their behaviour, such as login credentials, other identifying information, financial details, contact details, etc. This data may then be used to encourage similar behaviour in the future.
- To scam people out of money, data or property - Some forms of false information are deliberately used by cybercriminals to trick people into making payments or sharing personal data. These scams often involve emails or messages that appear to be from a trusted company asking for personal data, but actually leading to fake websites designed to steal the data.
- To promote ideas/beliefs - Misleading or false content may be used as a method to influence an online user’s beliefs and ideas, usually in an attempt to align them more closely with the beliefs of the content creator. This could be done to influence religious beliefs or political views, or to undermine trust in other selected groups. In recent years, there has been growing evidence that some countries may have interfered with the political processes of other countries, using social media to spread false and misleading information that might affect the electorate.
The motives mentioned above can intertwine, and some misleading information serves multiple purposes at once. Some publishers or distributors of a message may not even be fully aware of the real motive behind it.
What are the risks around information disorder?
The risks posed by misleading or false content can vary from individual to individual. For children and young people, their age, development and online experiences can also affect the likelihood and nature of risk from information disorder.
Here are some risks that are common to many online users:
- Financial loss - Individuals might be the victim of an online scam, phishing, etc. and lose large amounts of money.
- Personal data theft - People’s data and personal information can be stolen or exposed by those who produce misleading content.
- Believing false information - As some users may exist in a filter bubble, they might see only the news they want to see. Some of this news can be fake news, but might be perceived to be real and trustworthy.
- Drowning in irrelevant content - Individuals might experience difficulties in knowing where to get the most relevant facts about certain topics or events because of the overload of information online.
- Risks to mental health and emotional well-being - Disinformation can harm our mental health because it is purposefully manipulative and designed to cause anxiety. The sheer volume of untrustworthy content can also overwhelm people and increase anxiety.
- Risks to physical health and safety - Misleading information about health, medicines, exercise and diets can be dangerous, particularly for people who are vulnerable to these kinds of claims, and can put a person’s health at serious risk.
How can you spot false or misleading content and information?
The wealth of information online can make it difficult to always identify what is fake or misleading. However, these key questions from ‘News in the Classroom’ are useful for your students to ask themselves whenever they encounter online content that they are unsure about:
- Is the title neutral? Is it click-bait? The title doesn’t always say it all. For example, did you know that titles online are sometimes modified to get more clicks? Or that titles also often include quotes?
- Who is the author? Is there an author listed? Does the author really exist? Does he/she write for well-known sites/newspapers?
- What is the date? When was the message written? Is the content current? Sometimes an old article is given a new date, and the title and content have been updated.
- Who published the news? A news medium? A person via social media? What audience does the author want to reach?
- What are the sources? Where does the information come from? From another news medium, organisation, interview or report?
- Are the hyperlinks correct? Articles refer to other websites, organisations or information. But are these real? And do they match what is claimed in the article?
- What reason did the author have? What is the author’s intent? Is it advertising? Is it an opinion? Is it to make you laugh?
- What are my preconceptions? You often have a preference for certain people who say or write things. Your personal experiences, or striking images, can also influence your judgement.
- How is the info presented? Some alarm signals: edited images, spelling errors, and lots of capital letters and exclamation points are suspicious.
- Why do I get to see this? Online, you often see different news stories than your friends do. That is the result of what you look up online, who your friends are and which preferences you have.
Activity:
Using the above tips, can you decide which of the following news stories are true, and which are fake? (Answers at the bottom of this section!)
- Uber to pay $9m in sex-assault report settlement
- Google Maps will soon suggest most eco-friendly route
- Lottery winner arrested for dumping $200,000 of manure on ex-boss’ lawn
- Religious Americans less likely to believe intelligent life exists on other planets
- Man Tries To Trade Kidnapped Baby For 15 Big Macs At Arkansas McDonald’s
- Swedes invent antifreeze for humans!
Did you correctly identify the fake news stories?
Answers: 1. TRUE, 2. TRUE, 3. FAKE, 4. TRUE, 5. FAKE, 6. FAKE.
How can I teach about mis- and disinformation in the classroom?
Taking time to work with your students and teach them skills to spot misleading/false content can be a great way to empower them to keep themselves and others safe online, and it can also generate lots of interesting discussions that allow further learning.
Not sure where to start? This infographic from the FACTS4ALL project has some useful tips:
Here are some other tips that might be helpful:
- Start early! – Discussions around real and fake online content can start at a young age with learners as they start to explore the online world. Developing media literacy skills early can give children time to practise and refine those skills as they get older.
- Make it interesting and fun – Some fake news stories and AI-generated content are so ridiculous that they are easy to spot as fake, but these can be very engaging examples to help start a discussion with your students about how to spot fake content.
- Make it interactive – Using games or activities can be a great way to explore the nature of mis- and disinformation. The following games are mostly suitable for students aged 11 or above, but you should check the content and suitability before use:
- Learn to prebunk – Discussing issues in society, different perspectives, key facts about important topics, and the motives people have for misleading others can all help you to prebunk: building your students’ resistance to false information before they encounter it, as opposed to correcting the facts after they have picked up false ones. Research has shown that this logic-based approach has far-reaching benefits: if you teach people to recognise the tactics of manipulation, they can spot them across many different claims, rather than only learning that individual claims are false. To learn more, check out this article from First Draft News.
Further information and resources
Want to learn more about supporting young people in developing their media literacy skills to recognise false and misleading information? These resources may be useful:
- Better Internet for Kids Resources – Educational resources from across the Insafe network of Safer Internet Centres. You can search for ‘misinformation’ or ‘media literacy’, for resources in your language and for resources for different age groups.
- CO:RE Evidence Base – A database of publications and research on youth online experiences. Searching the database with ‘misinformation’, ‘information disorder’ or ‘media literacy’ allows you to browse and read relevant research related to this issue.
- Facts4All: Schools tackling disinformation MOOC – As part of the Facts4All project, EUN and project partners created a Massive Open Online Course (MOOC) for teachers on how to tackle disinformation in school communities. All the MOOC content is available as an archived course. For more information on the project, please visit this page.
- School of Social Networks – This resource for primary-aged children, teachers and parents/carers provides information and advice on a range of online issues, including around evaluating what can or can’t be trusted online, such as fake news. There are accompanying activities that teachers can use in the classroom and parents can use at home.
- European Digital Media Observatory (EDMO) - EDMO is an independent observatory bringing together fact-checkers, academic researchers with expertise in online disinformation, social media platforms, journalist-driven media, and media literacy practitioners. The site contains many useful articles that can help educators stay up to date with current disinformation trends and issues across Europe.