Information & Disinformation
Matthias C. Kettemann
- 1. Welcome to the Age of Information
- 2. Freedom of expression and its limits
- 3. Is someone taking action?
- 4. Is disinformation a real problem?
- 5. Should disinformation be illegal?
- 6. Should social media companies actively fight disinformation or just ignore it?
- 7. Should platforms (at least) mark disinformation as such?
- 8. Facts
1. Welcome to the Age of Information
Never before in the history of humankind have we had access to so much information. But the way we access this information has changed tremendously. The number of videos, books and stories we can see, read and listen to has exploded. We select what, when and on which device we see and read news and learn new information about the world and others in it. But the articles, songs, videos, broadcasts, and podcasts are not stored in public repositories – there is no global public online media provider. Rather, private actors decide, according to their own standards and only loosely controlled by courts (actual courts and courts of public opinion), what people can see and find on their platforms. This can be a challenge because some private actors, and the algorithms they employ, do their best to keep us on their platforms, to keep us entertained. More time spent in their online spaces equals more potential views of ads, which make them money. These companies therefore prioritize engagement (including rage-fuelled engagement) over the quality of content. This is changing, and many private companies have done a very convincing job of fighting bad content, but we are at an early stage. There is so much good content out there – but also news that disinforms, disturbs and disparages people. How can we manage this challenge – share illegal and harmful content less, push legal and trustworthy content more – in a human rights-sensitive way?
2. Freedom of expression and its limits
Freedom of expression online is essential to human progress
Freedom of expression, which is guaranteed both in national constitutions and in all universal and regional human rights treaties, constitutes one of the essential foundations of society and one of the basic conditions for progress and for each individual’s self-fulfillment. It is important to note that protection extends not only to information or ideas that everyone agrees with, that are favourably received or considered inoffensive, but also to ideas that may offend, shock or disturb. Sometimes these need special protection. Ideas change over time, and the companies that own communication spaces may be inclined to delete too much. Freedom of expression is also a key enabling right, offline just as online, since, for many people, the Internet has become one of the principal means by which they receive and impart information and ideas. User-generated activity allows for an unprecedented exchange of ideas, instantaneously and (often) inexpensively, and regardless of frontiers. Of course, many people still do not have good and safe, or any, access to the Internet, but the international community is committed – including through the UN’s Sustainable Development Goals process – to ensuring that everyone can access the Internet and contribute meaningfully to the wealth of online information.
The consequences of disinformation
“Fake news” is news that is based on or communicates empirically wrong information with the goal of disinforming readers. The notion is very problematic, as politicians have started to use it indiscriminately to refer to news they dislike. It is better to speak of disinformation. But not all disinformation is false. Some disinformation contains only or mostly correct information, but the information is used selectively in an attempt to shift the frame of reference of the debate and influence the political narrative. Most satire is not conscious disinformation, as its authors usually wish to entertain rather than to influence; that said, satire can also be used to shift political discourses in a negative way. Overall, it is better to speak of disinformation when referring to conscious attempts to spread false news or narratives. Misinformation, by contrast, means stories that are wrong but are not consciously communicated to disinform readers. If an article states that a burglary took place at the airport, rather than at the train station, where it actually happened, that is misinformation. If a story says that asylum seekers committed a crime when the criminal was actually a citizen, and the story’s authors have the goal of painting asylum seekers in a bad light, then this is disinformation.
But why is digital disinformation particularly dangerous?
- it can easily be shared (it can even go viral, that is: be shared exponentially, quickly and widely);
- communication is instantaneous;
- large parts of the population can be reached at minimal costs to the purveyor of disinformation (it is thus an example of the power of asymmetric information activities);
- humans are especially bad at distinguishing false news from real news in the absence of verbal cues;
- human biases and cognitive deficiencies make us susceptible to online content that plays to our fears and seems to confirm that we were right all along (“See, I knew that the government was hiding something …”, “See, I knew that rich people can bend the rules …”);
- the anonymity and lack of face-to-face interaction reduce the willingness to engage in cooperative behaviour and increase aggressive behaviour and speech.
This becomes very dangerous. Think of digital disinformation the day before an election takes place that suggests a politician took bribes or think of the potential for harm when all members of a particular ethnic group are branded as criminals in a situation where tensions are already high.
3. Is someone taking action?
Truth should reign free. But it needs some help. As the global public had to learn, online social media services were used by bad actors to spread disinformation. From the US to Brazil, from Indonesia and Mexico to Kenya, disinformation brokers (some who have been identified, some who remain anonymous) have tried to sway public opinion and to attack critics. Yes, ideas should ideally fend for themselves. Bad ideas that do not move the world in the right direction should ideally just fall out of favour. But in times of strategic disinformation campaigns, the disillusionment of parts of the populace with traditional ideas about politics, the growing influence of masterful manipulators in politics, and the rise of nationalist movements across the world, this is not enough. The truth needs help, especially as strategic lies are often connected to (or lead to) human rights violations, often on a massive scale.
What governments are doing
States have the primary responsibility and ultimate obligation to protect human rights and fundamental freedoms, including in the digital environment. Coupled with the obligation to respect their commitments under international human rights law, they must introduce regulatory frameworks, including self- or co-regulatory approaches, which allow for differentiated treatment of expressions online, including effective and appropriate remedies. Many states, including Germany, France, and Russia, have introduced laws fighting certain kinds of (illegal) disinformation, e.g. during certain periods such as elections. But it is not enough for states merely not to interfere with freedom of expression, which covers disinformation that does not amount to internationally prohibited expression, such as calls for genocide, qualified hate speech, and incitement to terrorism. States also have a positive obligation to protect human rights and to create a safe and enabling environment which allows all persons – as a recommendation by the Council of Europe, an organization charged with ensuring respect for the rule of law in Europe, puts it – to “participate in public debate and to express opinions and ideas without fear, including those that offend, shock or disturb state officials or any sector of the population”. States are, of course, not the only actors in ensuring human rights online. Internet intermediaries, too, have duties under international and national law. In line with a document the United Nations developed to clarify the role of companies as human rights actors, the “UN Guiding Principles on Business and Human Rights”, Internet companies should respect the human rights of their users and affected parties in all their actions.
The international community has also started to fight disinformation
Internationally, states have recognized that sharing false or distorted news can be a serious problem. The United Nations has convened two working groups to look at the issue and to develop norms that make the Internet more secure and trustworthy. The European Union is also engaged in similar efforts, especially in order to safeguard the integrity of elections and to fight terrorist propaganda online.
4. Is disinformation a real problem?
The core of the controversy: Studies on disinformation come to diverging results. On the one hand, the concrete impact of disinformation is hard to pinpoint, though not non-existent. On the other hand, disinformation actors seem to take up a disproportionate part of the news about online media practices. This reinforces the impression that the Internet can be used as an effective forum for disinformation. Others argue that the main threat of disinformation is making everyone argue more and trust each other (and reliable sources of news) less. The more ‘news’ there is, the less time journalists have to assess and contextualize it, and the less time media consumers have to digest and understand it. Is disinformation a threat to social order? Because of our biases, and in the absence of verbal cues, we are more susceptible to widely shared misinformation. Additionally, bad actors have started to use the Internet as a vehicle for the strategic deployment of disinformation, which we are susceptible to because traditional concepts of trust need to be rethought for online settings: Humans often assess the value of information based on the trust they place in the speaker, not only the message. Do people trust disinformation? We (usually) trust doctors because they are doctors and because, in order to open a practice and/or work at a clinic, they need proof of formal education. We (usually) trust printed quality newspapers, such as the New York Times, because they have a reputation for seeking the truth. We compensate for our lack of knowledge by finding out more about the people and institutions involved. On the Internet, this does not work well. On the one hand, trust in traditional institutions – doctors, newspapers – is slowly declining and people are starting to search for alternative news and opinions online.
On the other hand, the diversity of news and opinions online makes it easy for readers to ‘escape’ from news that does not fit their opinions and search for news that makes them feel safe and understood by reaffirming their own biases (even and especially when they hold fringe or minority views). Because of human biases, they then start to believe information that echoes their existing opinions. Authors of disinformation use this to reinforce mistrust and guide readers away from facts. It is easy to be manipulated. We all have cognitive vulnerabilities, such as the tendency to misjudge probabilities, look for simple solutions, or side with the perceived majority. Do Internet users live in filter bubbles? While researchers suggest that certain Internet users, notably those belonging to extremist groups or those with very niche interests, tend to communicate primarily, or even solely, within their in-group, filter bubbles are an overhyped phenomenon. Personalization has barely any impact on search results with major search engines. It is not (only) the fault of social networks that users often engage with content they feel they agree with. However, we have to be cautious of concerted campaigns to discredit certain news sources as a whole. Or are we consuming news content from many different sources? Even though social media play an important role in many people's lives, they are used for a broad variety of purposes, with media and news consumption being just one of them. The vast majority of Internet users across all age groups regularly look at news outside social networks and off the Internet. Only a very small minority of social media users limit their news consumption to social media platforms. This makes disinformation on social platforms a smaller danger than is currently being argued across mainstream media and by policy-makers.
5. Should disinformation be illegal?
The core of the controversy: Internet users engage disproportionately with sensationalist content. Ask yourself: Would you click on a headline that seems to bait you and promise you that there are 10 reasons why you should click on it, with the 9th reason promising to shock you? Players on the disinformation market use human biases and cognitive deficiencies to bait online users into consuming disinformation. Does this mean that states should outlaw disinformation and criminalize purveyors of disinformation? How is disinformation regulated? Online speech is regulated through a complex system of rules: community standards, national laws, regional norms, and international law. As of now, disinformation may be individually annoying and societally dangerous in aggregate, but much of what counts as disinformation is not illegal. First, we need to distinguish between statements about people and general disinformation. Usually, publishing wrong information about a specific group is not actually punishable – with some narrow exceptions in certain countries for historical, religious or cultural-traditional reasons (e.g. Holocaust denial in Germany). Is disinformation about people different from disinformation about things? Disinformation about people can be prohibited if it contains wrong statements of fact and not merely opinions. The criminal offenses of insult, slander, and libel can be invoked. When journalists deliberately publish false news in a press organ, they may violate journalistic guidelines and a press code of honour. Can freedom of expression be limited? Some disinformation is already illegal. If new rules limiting the right to spread untruths are considered necessary, the limits on restrictions to freedom of expression need to be considered.
Human rights law usually prescribes that restrictions must be provided by law and be necessary for certain legitimate goals, such as respect for the rights or reputations of others, or the protection of national security, public order, public health or morals. Restrictions must thus be legal, necessary, legitimate and proportionate. Would making disinformation illegal be a problem under human rights law? Freedom of expression constitutes one of the essential foundations of society. Under most major human rights systems even opinions that can offend, shock and disturb are protected (which is logical, as mainstream opinions usually do not need protection quite so urgently). The Internet is one of the key places and principal means for people to express themselves. Therefore any limitation of disinformation would have to be measured against human rights standards. Already now, freedom of expression does not extend to statements and representations prohibited under international law. These include calls for genocide, terrorism, serious discrimination in the form of violent hate speech, and sexual exploitation of minors. Here intermediaries are even obliged to become active on their own. But how accurate can they be, especially in light of time constraints? What are the appeals mechanisms in place? The role of intermediaries becomes particularly important when state actors refuse to enforce human rights online or actively abuse laws designed to address disinformation in order to suppress dissent.
6. Should social media companies actively fight disinformation or just ignore it?
The core of the controversy: There will always be disinformation, and a lot of disinformation is protected by freedom of expression. But does it hurt anyone if disinformation is left online? Does it negatively impact the trust people place in the Internet? Fundamentally, companies can decide what content to delete under their community standards or terms of service, even if that content is legal. This is qualified somewhat when it comes to legal content on huge social networks. Indeed, a few big and influential social media platforms have become ‘de facto’ public discourse spheres. Being excluded from a social media presence where you have access to a billion or more people has a hugely different impact than being ‘censored’ by a smaller outfit. Any deletions under the terms of service should therefore be taken while respecting certain procedural standards and on the basis of a weighing exercise between the harm caused by the speaker and their freedom of expression. The question remains: Should companies (be allowed to) continue to carry disinformation, especially if it is widely shared, drives traffic to their sites and can thus be monetized? Or is it better – and more responsible – to quickly delete content that is untrue or detrimental to social cohesion and to clearly take a stand against ‘coordinated inauthentic behaviour’, as big social media companies have successfully started to do? Why delete content that is perfectly legal? People can use their own judgment to assess the truthfulness of content. If content is deleted, the impression may emerge that a platform is politically biased. Being excluded from a quasi-public space can lead to a further rise in alternative media sources and increase mistrust of more mainstream platforms. While platforms can delete content under their terms of service, there is often a tension between these terms and national laws that has to be resolved. Some content is illegal in certain countries and legal in others.
Then platforms have to employ geo-blocking to ensure they respect local laws. How can we prove that content is wrong? It is very difficult to prove that a certain piece of content is wrong, and even if it is correct, it may be societally dangerous if it shifts the frame of reference or contains only highly selective “truths”. But generally, opinions deserve to be heard, even if they disturb. More generally, trusting companies may not be enough. Companies’ expression-related policies can exhibit certain biases, even if they are not intentional but rather linked to the organizational set-up (such as ignorance of certain minority languages or traditions). Additionally, from a purely economic viewpoint, news that drives traffic to a site may be good (at least in the short term). However, most platforms have started to take a long-term view and have cited research that informs them that inauthentic behaviour leads to less trust in online communication spheres and may, in the long run, reduce the willingness to interact through the Internet. What about the societal consequences of people believing untruths? Studies tell us that readers will believe misinformation and disinformation, especially if it confirms pre-existing biases and beliefs. Therefore, every single piece of disinformation can confirm someone’s wrong idea about the state of the world. Leaving disinformation online also allows network effects to set in: content may be shared quickly and globally. Does more information online lead to less trust? Studies also show that there is a consistent relationship between exposure to disinformation (or perceived misinformation) and a decline in trust in news sources, across all types of media. In that sense every piece of disinformation poisons the well. Further, companies progressively see themselves as responsible corporate actors. They have a stake in the social cohesion of the states they operate in.
Fighting disinformation, as many already do very effectively, is part of their duty as responsible corporate actors.
7. Should platforms (at least) mark disinformation as such?
The core of the controversy: Citizens who are social media users must be empowered to answer questions such as "Which sources of information can I trust?", "How do I assess and verify content?", "Do I know what agenda the author may promote?” – and companies should assist them in finding answers. One social network service has started to build a library of political ads that allows users to see what organization paid for an ad, to whom it is targeted, and what other ads the one they see is associated with. This helps with assessing the reliability of the information conveyed. But is it a good idea to have platforms assess the correctness of content and the reliability of its authors and mark content accordingly? Does this lead to more trust – or do users feel disenfranchised and even less trustful of the platform? Would it help readers to assess content better if problematic content were marked as such? Deleting problematic content is not the best solution. Why not cooperate with fact-checkers and attach notices to content which may be problematic? Then readers can immediately see if a piece of news is actually fake, and what level of trust the author of a post or video deserves. Users can be invited to contribute to this fact-checking exercise. Should social media companies use algorithms to make misinformation harder to find? Social media companies can also use algorithms to downgrade misinformation and to disincentivize sharing. If users choose to share a problematic piece of content, they are immediately put on notice. Links can lead users to a correction, such as a trusted study on the subject or an article from an established news organization. But what does being ‘told’ that certain information is untrue make people think? Empirically, simply telling users that a certain piece of content should be mistrusted does not always work. Users may consider these types of notices an attempt to influence them and become angry, a phenomenon psychologists call ‘reactance’.
If people feel that their personal liberties are infringed upon and an authority figure tells them not to act in a certain way or not to believe what they are reading, they may react the opposite way, sharing the disinformation even more. This ‘boomerang’ effect must be avoided. On the other hand, if a notice is only small or neutrally formulated (e.g. “For more on this, see …”), users might not even realize that the information they read is corrected by the content of the notice.
8. Facts
What people do online can be regulated
Every state has an obligation to protect the rights of people under its jurisdiction and control. Therefore, online behaviour not only can be regulated, some behaviour – especially if it can be detrimental when scaled up – must be regulated in order to ensure a stable and safe online environment. Laws and regulations are and must be enforced. We should, however, not fall into the trap of believing that only laws enforced by states are the solution to disinformation. Rather, states and companies must both, in their respective roles, develop laws and norms to prevent harms to Internet users and citizen-customers.
International treaties, including those ensuring freedom of expression, apply online
There is no international treaty dedicated to the Internet, but international law fully applies to Internet-based information and communication flows and the infrastructure supporting them. Freedom of expression, independent of the medium you choose and the country you live in, is protected by treaty law and customary international law. Further, general principles of international law, like the principle of non-intervention and due diligence, normatively frame what states can do – and must do – online. A state, for instance, that allows someone on its territory to publish especially harmful disinformation that amounts to e.g. incitement to genocide may be internationally responsible.
Automated suggestions of what to see and read are not always neutral
These “suggestion machines” help platforms manage how news is presented to users. Implemented through algorithms, they are, however, neither natural nor neutral. They are designed by humans and used by companies with a certain goal. If online platforms prime these algorithms to maximize engagement (that is: if the goal of an algorithmic selection of news is primarily to keep users engaged and on the site), then the quality of news content might not be prioritized.
Of course, users have other sources of information within their media repertoires, but – societally – we should start engaging in a discussion on how algorithmic decision-making influences the way we perceive online content.
Social media is not an accurate mirror of society
If we read online media, we may come to think that what we consume adequately reflects society as a whole. However, studies have shown that social media content paints a distorted picture of society. Only very few highly active media users produce most Internet content. Media producers that seek to strategically disinform are especially active. This is why there is the perception that online conspiracy theories are broadly accepted (they are not) or that everyone is beautiful, eats aesthetically pleasing food and is constantly on holiday (they are not). At the same time, the number of likes, shares, followers or comments is not a good metric for popularity. These numbers measure engagement and can easily be manipulated, e.g. by buying followers.