Fake news, echo chambers and filter bubbles: what you need to know

In each edition of the BIK bulletin, we look at a topical issue – our latest edition focuses on fake news. Fake news, echo chambers and filter bubbles are hot topics right now. Are they the next generation of online-related challenges? Are they old foes wearing new clothes? Or are they something else? Martina Chapman, an independent specialist in media literacy, considers the role that critical media literacy, supported by cross-sector collaboration and coordination, may have in countering these issues. Read on to find out more (read the full June 2017 edition of the BIK bulletin here).

Date: 2017-06-29 | Author: Martina Chapman

In many ways, networked technology has facilitated the democratisation of the media and opened up the public sphere as we have moved from a small number of general-appeal broadcasters delivering to the many, to almost everyone broadcasting all kinds of niche content to everyone else.

This disruption of the traditional structures supporting democracy has not been without its challenges. Right now, there is a particular focus on issues such as fake news, filter bubbles and echo chambers, and there will be other challenges in the future as the internet continues to evolve in ways that we probably can't imagine right now. To see how the development of critical media literacy could help in the fight to bring about a better internet, it is useful to look at some of the underlying factors that make us susceptible to these issues.

Mistrust, misinformation and manipulation

In April 2017, BBC Future Now asked a panel of experts about the 50 grand challenges for the 21st century. In the section called "Future of the Internet, Media and Democracy", many of the experts described the difficulty of knowing what is true and accurate online as one of the key challenges we face today.

We live in a fragmented media environment where trust in conventional media appears to be eroding, sometimes with good cause. As people lose trust in "mainstream" media, they are more likely to seek views from alternative sources, many of which are not subject to the same regulatory rules and guidelines as more traditional media – so news sources are not really competing on a level playing field.

People are increasingly using social media to access news, and often rely on items circulated by friends, or friends of friends, rather than robust sources. A 2016 Pew Research Center study showed that a majority of US adults – 62 per cent – get news on social media, and 18 per cent do so often. In this kind of environment it is becoming more difficult to identify sources of information, let alone evaluate them for truth and accuracy.

Talking about the 50 grand challenges for the 21st century, Kevin Kelly, technology author and co-founder of Wired magazine, notes that "Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counterfact. All those counterfacts and facts look identical online, which is confusing to most people."

When confusion and mistrust in established sources set in, the door is left open for misinformation to take hold. Right now, we are in a place where all kinds of dubious and sometimes dangerous content spreads and takes hold, often away from the gaze of traditional forms of scrutiny.

A lot of online content is not subject to any effective editorial process, with the result that misinformation, even when flagged as inaccurate, can spread like wildfire, fanned by intended (or perhaps even unintended) "endorsement" from those who share, like and comment on it. A recent study from Ofcom showed that three in ten people who share links to articles on Facebook or Twitter say they often do so without fully reading the content first.

Another Pew Research Center study, conducted in December 2016, found that 64 per cent of American adults said made-up news stories were causing confusion about the basic facts of current issues and events.

In another BBC Future Now article, on lies, propaganda and fake news, Stephan Lewandowsky, a cognitive scientist at the University of Bristol in the UK, comments that "Democracy relies on people being informed about the issues so they can have a debate and make a decision." He describes having a large number of people in a society who are misinformed and have their own set of facts as "absolutely devastating and extremely difficult to cope with."

A notable example comes from the UK's 2016 EU referendum, where the Vote Leave campaign claimed that Britain was paying £350 million a week to the EU – a claim repeated by various high-profile politicians despite being repeatedly debunked by the UK Statistics Authority. In addition, the Vote Leave campaign claimed that this money would be diverted into the National Health Service – a claim that was literally "writ large" on the side of a Vote Leave campaign bus and widely circulated on social media. On the morning the referendum result came in, in favour of Leave, that particular claim was promptly disowned.

This so-called "post-truth" era, where people (often politicians or people of influence) make false claims to influence public debate with little or no negative consequences to themselves, helps to feed the mistrust of messages and drive the flow of misinformation.

Another contributing factor is that a lot of people don't really understand how the media, and especially digital media, operates or how it is funded. Recent Ofcom research shows that significantly fewer people know how the BBC website and BBC iPlayer are funded compared to BBC TV. Just over half of people know how search engines are funded, and less than half of adults are aware that the main source of funding for YouTube is advertising. If you don't know who is paying for the content you are seeing, how can you properly assess the motivations of its producers and distributors, or its accuracy?

Looking specifically at the issue of fake news, if we take the term to mean the deliberate creation or presentation of false information with the purpose of misleading, it's fair to say that this is not a new concept: it has been around for a very long time, often in the form of propaganda or similar. However, the term has really come to the fore in the last year or so, thanks in part to the US presidential campaign. The outgoing US President, Barack Obama, denounced the spate of misinformation across social media platforms following the elections, and it is certainly no secret that President Trump displays an apparent disregard, distrust and disrespect of mainstream media, and frequently cries "fake news".

There is an economic aspect to the fake news phenomenon that we need to consider. In the pre-Trump era, fake news was often referred to as "clickbait" – those headlines that are just so irresistible or unbelievable that we have to click on them, often to be disappointed by the poor content: "10 amazing cat videos", "These celebrities were gorgeous once, just look at them now", and so on. Again, this is not a new, digitally-triggered phenomenon. Look at the long history of tabloids, or the shelves of "celebrity gossip" style magazines for sale in most newsagents: many have successfully used similar techniques in print for years. Although we may not buy the paper or the magazine, we often glance at the sensationalist front pages lining the newsagents' shelves, even when we know they are probably not entirely true.

What is different now is that the impact of each digital "glance" at fake news or clickbait is amplified in the online world, because a lot of online content is valued by the volume of traffic visiting the site, with revenue generated through advertising and/or the selling of user data. As Simeon Yates, Director of the Institute of Cultural Capital at the University of Liverpool, suggests, "the economics of social media favour gossip, novelty, speed and 'shareability'".

And content that triggers an emotional reaction is more likely to be engaged with. NewsWhip, a social media monitoring company, recently published a report on the rise of hyper-political publishers online, in which it found that the publishers of hyper-partisan pages were "highly adept at provoking their followers into selecting a strong emotion rather than just a like". It also noted that the most popular of these emotions was the angry reaction.

Figure: two pie charts showing the breakdown of emotional reactions to hyper-partisan content

Source: NewsWhip, The Rise of Hyper-Political Publishers

So, sensationalist content, or content likely to provoke an emotional reaction, is more likely to be engaged with – clicked on and potentially shared. More clicks mean more shares, which in turn create more clicks, which generate more user data and more revenue from advertisers. As long as people can make easy money by driving large volumes of traffic to particular sites, content will be created (accurate or not) that exploits particular sensibilities. But why are social networks such an important part of the dissemination network for fake news?

Many social networks and search platforms are based on algorithms which curate content for us based on our previous behaviour and consumption patterns. The idea is that algorithms make it easier for us to find content that will interest us – and it works to a large degree. But do these algorithms create "filter bubbles" which actively limit the variety of opinions we're exposed to, as suggested by Eli Pariser, CEO of viral content site Upworthy? And could this potentially skew our world views in ways that are not conducive to recognising or understanding "the other"?
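To make the feedback loop concrete, here is a minimal sketch of engagement-based curation. It is purely illustrative – a toy model with invented topics and numbers, not any platform's actual ranking system – but it shows how a feed that rewards whatever a user already clicks on can narrow the range of topics that user sees over time:

```python
# Toy model of engagement-based curation: items are ranked by how well
# they match a user's inferred interests, and every view sharpens the
# profile, so the variety of topics shown tends to shrink over time.
import random
from collections import Counter

TOPICS = ["politics", "sport", "science", "celebrity", "local"]

def rank(candidates, profile):
    # Order candidate items by the user's current weight for their topic.
    return sorted(candidates, key=lambda topic: profile[topic], reverse=True)

def simulate(rounds=20, feed_size=3, boost=0.3):
    profile = {topic: 1.0 for topic in TOPICS}  # start with no preference
    seen = Counter()
    for _ in range(rounds):
        candidates = [random.choice(TOPICS) for _ in range(10)]
        feed = rank(candidates, profile)[:feed_size]  # show the "best" matches
        for topic in feed:
            seen[topic] += 1
            profile[topic] += boost  # every view ranks this topic higher next time
    return seen

print(simulate())  # one or two topics usually end up dominating the feed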
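```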
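Run it a few times: because every view reinforces the profile that produced it, one or two topics usually crowd out the rest within a handful of rounds – a filter bubble in miniature. Real systems are vastly more sophisticated, of course, and, as the research below suggests, user behaviour matters at least as much as the algorithm.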

Ofcom's Adults' Media Use and Attitudes study reports that the majority of adults with a social media profile or account say they see views they disagree with on social media. Research from Philip Seargeant and Caroline Tagg of the Open University shows that users' own actions are still a very important element in the way Facebook gets used. They found that most users prefer not to use Facebook for political debate, preferring "to keep interactions trivial and light-hearted". When confronted with views they oppose, people report ignoring or blocking posts, or even unfriending people, to avoid conflict – effectively self-censoring and creating tailored news streams by hiding opinions they disagree with.

So, even if we eliminated algorithms (which is very unlikely), people may still choose to create their own filter bubbles, filled with people who, in general, are likely to share a similar world view, similar values and a similar sense of humour – an ideal environment for memes, tagging and trolling to help a piece of content "go viral". And, as anyone who works in advertising knows, word of mouth is a very powerful marketing tool.

If money is one motivation for the growth of fake news, then manipulation is another. Data analytics and social psychology are a powerful combination which can result in very persuasive, and highly personalised, micro-messages being delivered to unsuspecting targets.

In recent months, reports have been emerging (such as those from Observer journalist Carole Cadwalladr) about the extent to which data analytics, combined with psychometric techniques said to have military origins, may be used to influence and persuade people in a highly-targeted manner – with swing voters in both the UK's EU referendum and the US presidential election suggested as particular targets. Psychometrics, sometimes also called psychographics, focuses on measuring psychological traits, such as personality. The data, or digital footprints, that we leave as we meander around digital media – and, in particular, social media – can provide invaluable insights for marketing, or electoral, campaigns.
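As a rough illustration of the general idea – the trait weights, page names and message texts below are entirely invented for this sketch, not drawn from any real campaign or dataset – micro-targeting boils down to scoring users on inferred traits from their digital footprint and then serving each one the message framing predicted to resonate:

```python
# Toy illustration of psychometric micro-targeting. All values here are
# hypothetical examples, not real data or any campaign's actual model.

# Invented weights linking page "likes" to an anxiety-related trait.
TRAIT_WEIGHTS = {
    "neighbourhood_watch": 0.8,
    "home_security_deals": 0.6,
    "extreme_sports": -0.7,
    "travel_bargains": -0.4,
}

MESSAGES = {
    "fear": "Protect what matters: things could get worse unless you act.",
    "optimism": "A brighter future is within reach: be part of it.",
}

def trait_score(likes):
    # Sum the weights of the pages this user has liked; unknown pages score 0.
    return sum(TRAIT_WEIGHTS.get(page, 0.0) for page in likes)

def pick_message(likes, threshold=0.5):
    # High scorers get the fear framing; everyone else gets the upbeat one.
    return MESSAGES["fear"] if trait_score(likes) >= threshold else MESSAGES["optimism"]

print(pick_message(["neighbourhood_watch", "home_security_deals"]))  # fear framing
print(pick_message(["extreme_sports", "travel_bargains"]))           # optimism framing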
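```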
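The point is not the crude arithmetic but the principle: combine enough weak signals across millions of users and the targeting can become very precise indeed.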

So if this micro-targeting of messages, based on the data trails that we leave on social media, has the potential to influence how people might vote, what else could it be used for? And if we, as internet users, don't get smart about this, are we leaving the door open to other areas of our lives being influenced in ways we might not be aware of or understand?

As Will Moy, Director of Full Fact, an independent fact-checking organisation based in the UK, said in relation to the 50 grand challenges for the 21st century: "We shouldn't think of social media as just peer-to-peer communication, it is also the most powerful advertising platform there has ever been."

However, it's not just social media that has the power to influence us. Research from Ofcom shows that less than half of search engine users in the UK are able to correctly identify adverts or sponsored links in the results page of a search engine – even when they are identified by the word "ad". And, when it comes to judging the accuracy of information contained on sites that are listed on the results pages of search engines, almost one in five users believes that any results returned by a search engine will contain accurate and unbiased information. This, on its own, is an uncomfortable statistic. However, the same research showed that search engines are by far the most popular source when looking for information online. That's a lot of people handing over editorial responsibility to commercially-driven algorithms.

Getting the facts right

There is no doubt that the last nine months have seen a marked increase in the range and number of initiatives aimed at raising awareness of how content, and specifically misinformation, can be used to manipulate and persuade.

Peter Barron, Vice President of Communications for Europe, the Middle East and Africa at Google, commented recently: "Judging which pages on the web best answer a query is a challenging problem and we don't always get it right". As a result, Google has committed to improving its algorithms so that accuracy is taken into account in search results.

As a counter to post-truth narratives, misinformation, fake news and plain old lies, we have seen a range of fact-checking services emerge. Faktabaari in Finland has been operating for a number of years now and provides an academic fact-check of statistics and statements used in political debates. Similar models have been adopted by many broadcasters, such as the BBC's Reality Check, Channel 4's FactCheck and CNN's Fact Check. Similarly, Snopes.com has been around for a while, and Full Fact, a UK-based independent fact-checking charity, targets media stories that stray from the truth.

In France, CrossCheck was devised and developed by First Draft and Google News Lab in consultation with newsroom partners across France. The project is a bilingual fact-checking collaboration of news, education and technology partners, involving 37 newsroom partners checking and cross-checking audience questions. It operated for ten weeks in the run-up to the French presidential election. However, those behind CrossCheck are now trying to establish whether having multiple newsrooms – and multiple logos appended to each fact check – succeeded in increasing readers' trust in the content, or whether it had the unintended consequence of appearing as "mainstream media" looking out for one another.

Image: news partner logos from the French CrossCheck service

Source: CrossCheck website

When to debunk a story?

A key question remains around when to debunk a story: do it too early and you risk promoting it; leave it too late and the damage is done. As Paul Resnick of the University of Michigan points out in the BBC Future Now article on lies, propaganda and fake news, "The problem is that corrections do not spread very well."

While fact-checking services are important – and very welcome and helpful in terms of holding people such as politicians and the media to account – they can only be part of the solution, along with initiatives like flagging suspicious content or cross-referencing sources.

While questions remain about whether people consuming media will have the time, the inclination or the skills to cross-check the stories they read, perhaps the real challenge with fact-checking is that people may simply continue to choose information that supports their own world views. As Victoria Rubin, Director of the Language and Information Technology Research Lab at Western University, Ontario, points out in her contribution to the 50 grand challenges for the 21st century: "human psychology is the main obstacle, unwillingness to bend one's mind around facts that don't agree with one's own viewpoint." Again, this is not a new phenomenon.

Managing social media

A quarter of the world's population now use Facebook. Many of these users are probably unaware that Facebook has become a key political campaigning tool.

In April 2017, Facebook published a white paper outlining how the platform was used by nations and other organisations to spread misleading information and, as a result, how they have expanded their security focus "to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people."

As a result, Facebook pledged to monitor attempts to manipulate the platform, identify fake accounts, educate at-risk people about data safety, and support civil society programmes around media literacy. Facebook also committed to reducing sensationalist headlines and stories in its news feed by looking at whether people have read content before sharing it; it has updated its advertising policies to reduce spam sites that profit from fake stories, and added tools to let users flag fake articles.

Recently, against a background of increasing political pressure in the run-up to elections across Europe, Facebook deleted "tens of thousands" of fake accounts and published ads in newspapers telling people how to spot "false news". While this initiative is welcome, there were some doubts as to its effectiveness. In the UK, at least, these ads were only taken out in the broadsheets and not the red-top tabloids, and the Campaign website described each one as an "aesthetically bland and painfully uninformative display ad".

In April, Google launched a series of day-long workshops for 13- to 18-year-olds in the UK as part of the global YouTube Creators for Change programme, which supports creators who tackle social issues and promote awareness, tolerance and empathy on their YouTube channels. The announcement of this initiative followed some controversy over YouTube's restricted mode – which some users claimed censored videos with LGBTQ+ (Lesbian, Gay, Bisexual, Trans, Queer/Questioning, and others) content – and over the spread of misinformation online.

Media policy and regulation

Regulatory frameworks across Europe cover news and current affairs, and it is important that regulators remain strong, independent and impartial for the media that fall within their remit. But not all media – or media technology – falls within current regulatory boundaries, and media policy and regulation can struggle to keep pace with the speed of change in the media landscape.

Social media, for the most part, bypasses the conventional gatekeepers and regulators, making it easier for lies and misinformation to be circulated, both knowingly and unknowingly.

Even with the conventionally regulated media, the impartiality which is required of broadcasters can place a significant responsibility on audiences to make their own judgement about the relative weight to give to opposing viewpoints. But how well do audiences recognise the right or left leaning of some media outlets, their partiality and self-interest, or how this might colour the coverage? And, indeed, do they even care?

There have been calls from some quarters to employ technological solutions such as a labelling system or a blacklisting of known fake news sites. However, as danah boyd suggests, "It's going to require a cultural change about how we make sense of information, whom we trust, and how we understand our own role in grappling with information. Quick and easy solutions may make the controversy go away, but they won't address the underlying problems."

The role of critical digital literacy

As Howard Rheingold argues in his book Net Smart: How to Thrive Online, people need critical media literacy to be "able to exert more control over their own fates than those who lack this knowledge".

We need to be able to trust traditional media to provide high-quality, well-researched, substantiated content, and to make clear what is fact, what is opinion and what is commercial content (like native advertising). And, as Charlie Beckett, Polis Director and LSE Professor, has argued, we need professional journalists to be more sceptical, more challenging and more transparent in their use of evidence, and to step up to their responsibility to "speak truth to power" – and that "power" will include social media and those involved in data analytics.

Critical media literacy can help us understand how and why content is created and better appreciate the craft of news reporting and how to spot and call out bogus content, rather than simply sharing it because the headline resonates with us in some way.

Our personal data is the currency on which the internet operates, and we need to value it, protect it and manage it as we would any other currency – that includes knowing what is being collected, by whom, and to what end. Critical media literacy is required to understand how the economics of the internet work and how user profiling based on data analysis allows micro-targeting of content.

However, critical media literacy is a dynamic concept that evolves in response to technological, social, cultural and political changes, and it would be a mistake to see it as some kind of magic bullet. People form opinions and beliefs for complex reasons, and better equipping citizens with the cognitive skills to analyse content and context does not mean they will do so every time, or that cognitive reason will win over moral and socio-emotional factors. So helping people develop better critical media literacy will not be a panacea for all digital ills, but it should be the first line of defence.

Maha Bali captures this sentiment well when she suggests that our approach to dealing with fake news should also be about "nurturing cross-cultural learning attitudes and skills that help make our knee-jerk reactions to news in general more socially just and empathetic."

Where do we go from here?

If we really do want to deploy critical media literacy as a defence against the risks around digital media, we need to commit to it. We need to fund it. We need to coordinate it so that valuable research, resources and experiences are shared and re-used. We need to develop projects that are scalable but also allow for the needs of individual countries and communities to be met.

This won't be straightforward. From a policy perspective, the vast scope of media literacy means that it is often, in effect, an orphan policy area with no clear, long-term owner. This is true not only at a European level but also at national and regional levels.

However, the Safer Internet Centre and Safer Internet Day models, delivered by Insafe and INHOPE under the European Commission's Strategy for a Better Internet for Children, offer a source of inspiration, as they provide sustained support for children, parents and schools. A decade ago, we were focused on trying to "protect" children from the internet. Today, the focus has evolved to building resilience and empowering children with the skills, the knowledge and the support that will help them navigate their digital journeys as safely as possible – and critical media literacy underpins this.

Developing critical media literacy is a life-long learning journey that will often involve changing well-established behaviours (for example, how we find and share content online). That's why we need a cross-sector approach that will provide support to people at different stages of their learning journey and that will be agile enough to respond quickly to issues as they emerge.

Traditionally, media literacy was seen as the responsibility of the education and media sectors, but this has changed dramatically and, now, one of the sectors playing the biggest role in the promotion of media literacy is civil society. But more work needs to be done to engage a broader range of sectors and players, especially the technology sector and online/social media platforms.

A number of European countries and regions (such as Brussels, the Netherlands and more recently Ireland) have already acknowledged this potential and have invested in developing broad networks of organisations for the cross-sector promotion of media literacy. This approach helps stakeholders recognise where they fit in, and how (and with whom) they might develop projects that will contribute to their own strategic objectives while at the same time contributing to the overall media literacy of their community and society.

In conclusion

We are currently living in a world where mistrust and misinformation are creating the perfect environment for the proliferation of fake news, motivated by a combination of money and the desire to manipulate attitudes, opinions and actions. Digital media provides the raw material (data) and the infrastructures (social media), while data analytics is evolving into a more precise targeting mechanism than we have ever seen before.

Until people really understand how the media works, how it is funded (from traditional models to search engines), and how data is collected and used, they will be unable to make an informed choice about the content they consume – and that is as relevant on a shopping site as on a news site or a social media site.

From Brexit to the US presidential election, we have seen how fear and blind optimism can be used as weapons to out-gun facts and figures, and how reasoned debate loses out to angry, easily tweetable slogans. Media literacy – and, in particular, critical media literacy – is the key to empowering people with the skills and confidence to interrogate the accuracy of information, to challenge unfair and inaccurate representation and extremist views and, ultimately, to make better-informed choices.

And we all have a role to play in making that happen.

Read the full June 2017 edition of the BIK bulletin to discover a whole range of resources from European Safer Internet Centres, perspectives from youth, and policy initiatives on tackling fake news.

The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of the Better Internet for Kids (BIK) portal, European Schoolnet, the European Commission or any related organisations or parties.

About the author:

Martina Chapman is an independent consultant specialising in the areas of media literacy and digital engagement. In 2013, she established Mercury Insights Ltd to provide advice and support to a range of cross-sector organisations. Her specific areas of expertise include media policy and strategy development, media research and content creation.

Recent activities have included working with the Broadcasting Authority of Ireland to develop their Media Literacy Policy and network, and working with the European Audiovisual Observatory on a major media literacy mapping project across 28 EU member states for the European Commission.

Martina has also been a regular contributor to the research department of Ofcom, the UK communications regulator, for whom she managed the 2017, 2015 and 2014 Adults' Media Use and Attitudes reports, as well as the 2014 and 2013 Children and Parents: Media Use and Attitudes reports. She also co-authored the 2014 Report on Internet Safety Measures.

She is a Eurovision Academy Faculty Member for the European Broadcasting Union and a regular contributor to national and international conferences and events on media literacy.
