
Materials on “Next Generation Internet Governance”

Core Questions

(1) Overview

Technology is neither good nor bad; it is what we make of it. This holds particularly true for digital technology, a profoundly human creation and arguably the most transformative technology in the history of humankind. As digital technologies have become, in many respects, the operating systems of our societies and economies, the opportunities afforded to us have exceeded our wildest dreams and aspirations. And so have their unintended consequences and side effects.

Once, in the public imagination, the internet was “the” infrastructure for empowerment and collective action, for creativity and collaboration, a democratizer of the means of digital production, giving a global voice to everybody, the engine of economic progress and wealth creation, a place for learning and self-development. Make no mistake: it still is. For millions of people worldwide the WWW is exactly that: it has given people access to services formerly unavailable to them, afforded the marginalized a voice and participation in political affairs, and enabled them to do things they could only dream of before.

However, misuse, manipulation and echo chambers, the militarization of cyberspace, digital trade wars and massive violations of human rights are also a reality of our time. Efforts to exert digital control have spread far beyond authoritarian regimes. In many “liberal” societies, such control comes in the disguise of digital nationalism or "network sovereignty", with national policy-makers unilaterally legislating on content, privacy and copyright.

The future of the World Wide Web is at stake. Now is the time to reimagine #ourinternet and to reflect and deliberate on how we can build an internet that works for everyone.

If we want to shape the next social and technological evolution of the internet, rather than being shaped by it, we need to reform existing policy frameworks. This is what #NextGenerationInternetGovernance (NextGenIG) is about.

The IGF presents a unique opportunity to jumpstart the development of a future-proof, holistic and resilient governance framework for #ourinternet.

This process should start with IGF 2019. Further milestones on the way to 2030 will be, inter alia, the 75th anniversary of the founding of the United Nations (October 2020), the review of the UN World Summit on the Information Society (WSIS+20) in 2025 and the “goal year” of the UN SDG Agenda, 2030.


(2) Fake news, misinformation or disinformation

Fake news, misinformation and disinformation are a global issue today, affecting all nations. While fake news is not a new phenomenon, the scale and reach of such news, and the damage it can cause, have increased significantly owing to the use of technology. The issue manifests itself in elections, in how people are swayed, and elsewhere. In fact, there are reports of people being killed owing to the spread of fake news. All this is affecting the safety, stability and resilience of the Internet and leading to a trust deficit among users.

A variety of approaches has been adopted in different regions and localities to fight misinformation and fake news: content intervention (fact-checking and filtering), technical intervention (dedicated anti-rumor platforms, algorithms), economic intervention (undermining advertising sources) and legal intervention (civil and criminal penalties), with different stakeholders involved, including state actors, NGOs, platforms and news media. Yet more needs to be done to address this issue.

The following questions are addressed at IGF 2019:

  • How effective are those approaches, and what are the shared policy principles, norms and mechanisms across regions and nations?
  • What are the responsibilities of actors such as Internet platforms and government regulators?
  • What roles do technologies (e.g. algorithms and bots) and other factors play in the process?
  • What are the best practices in light of freedom of speech and the necessary neutrality and legal certainty for platforms?
  • How can we restore public trust in Internet platforms, news media and politics? How can we hold actors accountable for their interventions?

Statements

"Misinformation is a global issue and all stakeholders need to come together to address it and restore the trust of people on the Internet."

AMRITA CHOUDHURY, CCAOI, INDIA


Workshops at IGF 2019

Wednesday, Nov 27

Misinformation, Responsibilities & Trust


Workshops at Youth IGF Summit 2019

Sunday, Nov 24

16:50 – 17:35 h, Session, “Localizing Internet Governance”, Digital Grassroots, Uffa Modey + Eileen Cejas. Location: Spreepalais am Dom, Anna-Louisa-Karsch-Straße 2, Berlin.


16:50 – 17:35 h, Session, “Privacy vs. Security on the Internet”, Global Forum on Media Development, Michael Oghia. Location: Spreepalais am Dom, Anna-Louisa-Karsch-Straße 2, Berlin.



Videos (HMKW)





Media Reports

Did you report on this topic? Your valuable content belongs in this section. Share your journalistic articles with the world – insert the links to your media reports on “Next Generation Internet Governance”, regardless of the language.


Thursday, Nov. 28

Commentary: Tackling Hate Speech – Or Not

By Ian Marsden, HMKW, University of Applied Sciences, Berlin

You can tell tackling hate speech is a sexy subject because it crops up every single day on the agenda at the Internet Governance Forum (IGF) being held in Berlin this week. Which is not to say it isn’t an important subject, just that, whilst there is much hand-wringing and outraged indignation from participants, it’s difficult to see that any real advance is being made in the battle to control this nasty little corner of the internet.

Weary conference delegates stuffed with the day’s breakfast pastries trudge from meeting to workshop to group discussion and back to meeting, all eager to agree that ‘something must be done’ about it and to try and reach a consensus. Conference organisers have put on an impressive array of speakers to help them do that - including regulators, civil rights activists, politicians, and representatives of multinational tech companies.

The problem, of course, is with the aforesaid tech companies, who are the bad guys here even though no one says so. Ingeniously, by participating in this conference and other similar events, usually as panel speakers, and engaging with the people seeking to regulate them, they have nullified or blunted opposition - and actually muddied the water as to their ability and responsibility to tackle the problem.

It’s not certain whether their success in this is due to people’s innate politeness in not asking awkward questions, or whether the tech representatives are just clever public relations people, but they are doing a fantastic job at pulling the wool over people’s eyes.

Part of the problem is that workshops sometimes are not really workshops but panel discussions, whose structure means that audience members who wished to radically take the tech companies to task over their responsibility were not able to do so. This was the case, for example, at the workshop entitled “Tackling Hate Speech Online”.

After the panellists had spoken, questions were gathered by the panel chair, but by the time they reached the relevant panellist, they could easily ‘not answer’ them, and there was little or no opportunity for the questioner to follow up.

The tension is that the tech companies know that the wild west in which they have hitherto operated is over and that public sentiment holds that they should be responsible for things such as hate speech published on their platforms. They are desperate not to be regulated, which would mean spending a great deal of money on technology and on employing people to monitor content.

At the talk yesterday, speakers included Laetitia Avia, a member of the French National Assembly. She tried to pin down a representative from Google as to what the ‘10,000’ worldwide moderators YouTube apparently employs actually do: what is their status, what guidelines do they work to, and what training is given to them. She said she had requested the information on a number of occasions, but it had not been forthcoming, and no further light was shed yesterday.

Meanwhile, delegates and concerned parties collude by debating guidelines and allowing the discussion to be sidetracked into whether the tech companies are actually publishers or not. Another major concern appears to be the desire to ‘avoid a race to the bottom in the attempt to achieve harmonisation’.

The trouble is that, whilst these are valid issues and no doubt worthy of debate, the tech companies are being let off the hook: they are avoiding regulation and not being made to deal with the problem.


Trust your kids: Digital safety and the youth

By Elisa P. Junges, HMKW, University of Applied Sciences, Berlin

Restricting children's access to the Internet is not the answer to ensuring their digital safety. This was the consensus among experts at the Internet Governance Forum on Thursday on a panel addressing the safety of youth online.

According to UNICEF, we live in a society where one third of all Internet users are below the age of 18. Therefore, trusting and supporting the youth’s online habits is good practice. "When parents take a primarily restrictive approach, children engage in fewer online activities and tend to have weaker digital skills. So, even though we understand parents' desire to take this approach, it has some costs", says Sonia Livingstone, professor of social psychology at the London School of Economics. Being exposed to online risks is not the same as being harmed, she stressed. "We may want them to encounter risks so they learn to cope and become resilient".

Learning to deal with the overwhelming amount of information, and with exposure to harmful content, is relevant given the popularity of smartphones. The latest research conducted by the international project ‘Global Kids Online’ reveals that 92% of children between nine and 17 years surveyed in Brazil have access to the Internet through their mobile phones. The numbers are not very different in the majority of South American countries. “Given the level of Internet access we have a high participation in social networks", affirms Daniela Truco from the Latin American wing of the UN.

The report further adds that in Uruguay, digital literacy has been part of classroom discussions. A government scheme provides a laptop to every child, and teachers are taught how to motivate children to explore and learn. Therefore, a notable observation is the high level of digital skills in children between the ages of nine and eleven. The case has proven how government policy can influence children’s participation in the digital world.

“As adult policy makers we have a real obligation to listen very carefully to children and what they have to say. They are key agents in the process,” says Amanda Third, member of the Technical Group of Asia-Pacific. “We need to move towards methods that are less restrictive and don’t see children as a simple resource to mine for their views and voices, but to actually think about consultation as an opportunity to encourage conversations with the youth.”

Allowing kids to participate in this debate was a recurring statement in the panel. The youth will help to produce better policies, as it concerns them, and in turn, they will acquire sophisticated knowledge on how to deal with the risks they could potentially face online. “Children are crying out for adults to trust them more deeply and to make policies, initiatives, and interventions that actually resonate with their experiences,” Third emphasizes. “If we give them that space, then we take a step towards a much better future in which policy is more effective and targets the actual needs of children everywhere.”



Internet governance: It is you and me

By Warda Imran, HMKW, University of Applied Sciences, Berlin

The term is widely used, but few really know its meaning: internet governance. At the internet conference IGF in Berlin, HMKW students asked top experts and contributors about their definition of this concept.

Vint Cerf, dubbed ‘father of the internet’ for co-creating the internet protocol TCP/IP on which the World Wide Web as we know it today was built, had a philosophical take on it. Referencing the Biblical verse “Do to others what you would have them do to you”, Cerf said that if everyone kept this in mind, the internet would be a very useful place to be. He also mentioned a ‘list’ of things that can be done to make the internet safer. If only everyone took the words of the co-creator of the internet to heart, the internet would already be a peaceful space.

A technical perspective on the term was provided by Max Senges, Lead Researcher of Digital Communities at Google. He said it’s a multi-stakeholder practice: “It’s the process where we deliberate and define how the technological side of the internet is developed as well as how the human political side of the net should be shaped.”

The notion that net governance involves multiple stakeholders was often repeated. Governments, big and small tech companies, civil society members - they all have a say in what kind of governance should be applied to the big black hole that is the internet. Punjab’s Finance Minister from Pakistan said that such forums are a great learning experience, as countries have a very diverse set of issues.

The commissioner of the newly set up Access to Information Commission in Afghanistan, Hamdullah Arbab, had a unique definition which personified some of the growing concerns of the global South. “Internet governance is a new phenomenon for Afghanistan. We are gradually progressing, we have 3G now and moving towards 4G. We don’t have e-governance laws and that’s a challenge for us,” Arbab said.

How can a debate on the internet take place when some countries of the world are struggling to catch up? These issues must be addressed.

“Through internet we will have a good governance system in the future,” Arbab adds. The optimism surrounding Afghanistan’s growing digital footprint gives hope that these forums will one day actually achieve their motto of bringing the remaining half of the population online.

The compact definition by UNESCO’s Sasha Rubel consists of only two words: “Collective intelligence.” Rubel signaled that governance could only be done in true unison: it is not the intelligence of a few stakeholders, but of everyone involved in and impacted by the internet. Furthermore, issues like intellectual property, cybercrime, online safety and the question of how to penalize criminals globally have to be addressed.

Jimson Olfuye of Kontemporary Konsulting from Nigeria echoed similar concerns. “Internet governance is tackling issues like intellectual property, cybercrime, online safety and how to punish criminals globally while also taking on cross-border issues,” he said. His legitimate concerns around the West African country echoed the concerns of the forum as a whole, but conversation can only get us so far; action is imperative.

However, implementation is tricky, because each country has many concerns of its own and action can only stem from debate and planning. Goran Marby, CEO of ICANN, said that internet governance is a concept in flux which is constantly changing - and should be. Perhaps the most humble definition came from Lilly Edinam Botsoye from Youth IGF in Ghana: “Internet governance is you and I shaping the internet, because everybody owns the internet and no one owns it.”

Wednesday, Nov 27

2019: The Beginning of the Era of the Deepfake? How Digital Manipulation can be a Threat to Democracy and Human Rights

By Peggy Whitfield, HMKW, University of Applied Sciences, Berlin

In early November, the prestigious English dictionary, Collins, named “deepfake” as one of its new words of the year for 2019; it’s certainly one of the buzzwords of the IGF conference in Berlin. But what exactly are deepfakes and what potential do they have to impact society and democracy? NATO Strategic Communications Centre of Excellence released a report entitled “The Role of Deepfakes in Malign Influence Campaigns” earlier this month, where the term was defined as “ … a portmanteau of ‘deep learning’ and ‘fake’, referring to the method used and the result obtained. Although most commonly associated in popular consciousness with video, a deepfake is any digital product - audio, video, or still image - created artificially with machine learning methods.” At the most basic level, deepfakes are created by deep learning algorithms which are made up of two artificial neural networks, one which produces data and the other which decides if it is realistic and believable. 
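The two-network setup the NATO report describes is better known as a generative adversarial network (GAN). As a rough illustration only - the one-dimensional toy data, the one-parameter generator and the logistic discriminator below are invented for this sketch, not drawn from the report - the adversarial loop can be reduced to a few lines of Python:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Real" data: samples from N(4, 1). The generator tries to imitate them.
def real_batch(n=64):
    return rng.normal(4.0, 1.0, n)

mu = 0.0          # generator: one learnable parameter, its mean (starts far off)
w, b = 0.1, 0.0   # discriminator: logistic regression on a scalar input
lr = 0.05

for step in range(2000):
    # --- discriminator update: label real as 1, fake as 0 ---
    xr = real_batch()
    xf = mu + rng.normal(0.0, 1.0, 64)
    dr = sigmoid(w * xr + b)            # D(real)
    df = sigmoid(w * xf + b)            # D(fake)
    gz_r, gz_f = dr - 1.0, df           # gradient of cross-entropy w.r.t. logit
    w -= lr * (np.mean(gz_r * xr) + np.mean(gz_f * xf))
    b -= lr * (np.mean(gz_r) + np.mean(gz_f))

    # --- generator update: push fakes toward "looks real" ---
    xf = mu + rng.normal(0.0, 1.0, 64)
    df = sigmoid(w * xf + b)
    mu -= lr * np.mean(-(1.0 - df) * w)  # d/dmu of -log D(fake)

print(f"learned mean: {mu:.2f} (real mean is 4.0)")
```

Even in this toy version, the generator learns to imitate the real distribution purely from the discriminator's feedback - no human ever labels what "realistic" means, which is part of why deepfake quality improves so quickly.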

Whilst deepfakes have been on the radar of governments and the tech industry for some time, the concept only really hit public consciousness when a YouTube channel from a mysterious, anonymous avatar called Ctrl Shift Face – who claims to have worked in Hollywood in VFX for many years -  started releasing deepfake videos of famous faces transplanted onto other famous bodies. One of the most widely shared videos is the infamous “Here’s Johnny” scene from the classic horror film, ‘The Shining’. The film stars Jack Nicholson, but in this deepfake video the actor’s face has been replaced with that of Jim Carrey. To a casual observer who is not familiar with the film, it is very convincing.

Ctrl Shift Face’s videos are clearly labelled as deepfakes, but the concern is that malicious actors - be they state-sponsored or just nefarious individuals – may use the technology in more harmful ways than entertaining people on social media. This has caused many people to raise worrying questions, as Marianne Franklin from The Internet Rights and Principles Coalition explains:

“All this technology has been deployed and sold and downloaded without any kind of consideration about whether it is even acceptable to have in the first place. The trouble is the technology and what can be done technically is galloping way ahead of legislation, political representatives, high courts, middle management and universities. That is why the use of deepfakes in revenge porn is so disturbing.”

Franklin’s view is supported by legal realities; in a world where revenge porn is ubiquitous and often goes unpunished – it is not a criminal offence in most of Europe, let alone the rest of the world – technology has already moved on. The tech platform Reddit is full of sites where people are creating deepfakes, sometimes for innocent purposes, but after public pressure, the platform has also taken down sub-Reddits which were devoted to creating pornography using the faces of celebrities and ex-partners without their consent.

But what about the use of deepfakes in a political context? It is possible, of course, that they have already been used and we simply did not notice. Thus far, there has been only one verified use of a deepfake in political campaigning, in Belgium in 2018. A Flemish socialist party, sp.a, published a deepfake video of US President Donald Trump jeering at Belgium for staying in the Paris Climate Accord. It seems to be a deliberately poor attempt, as ‘Trump’s’ final words are “We all know that climate change is fake, just like this video.”

More worryingly, there are as yet unsubstantiated claims that a deepfake was used in Gabon late last year and may have triggered a coup. President Ali Bongo had not been seen in public for a while and then suddenly a video of a new year address was broadcast; many people were convinced it was a fake and the military staged an unsuccessful attempt to overthrow Ali Bongo a week later. These two examples clearly demonstrate that deepfakes are not the technology of the future, but today’s reality.

Whilst the aforementioned examples may seem to have had dramatic consequences, video deepfakes are the least of our worries, according to Sebastian Bay, a Senior Expert at NATO StratCom and project manager of the organisation’s recent report on deepfakes. Apparently, audio is more concerning:

“Imagine a world where you pick up the phone and the voice and the video call that you see, you cannot 100 percent know that that is really the person you are talking too. So you are talking to your mum, it sounds like your mum, it looks like your mum, but it could be a person just using the ‘mum filter’. That is a very scary future.”

Bay goes on to explain that people could use this technology to mimic journalists to extort sensitive information from sources or it could be used to defraud individuals on a massive scale. The deepfake report also concentrates on the effect that deepfake text may have on democracy, with trolling, especially by authoritarian states now able to achieve mass saturation:

“With the development of ‘fake text’, trolls could finally be replaced by automated systems for all but the most sophisticated interactions with real humans. This new automated astroturfing could result in a massive scaling up of operations, generating an entire fake public to influence political decisions.”

If we think Russian troll farms are an issue, it seems we may be at the very beginning of a terrifying journey down the disinformation rabbit hole. Damian Tambini, Associate Professor at the Department of Media and Communications at the London School of Economics, echoed these warnings at a disinformation seminar at the IGF earlier this week. He stated:

“This is a war on democracies by non-democracies. It is asymmetrical. If one of the participants in this conflict has genuine popular sovereignty, then it matters if the opinions of its citizens can be manipulated.”

Such developments in the disinformation war could take fake news to a whole different level. How could any of us believe anything we see on the Internet? But some argue that, whether or not deepfakes are used, effective deception can still be carried out in more technologically basic ways without AI, as Ctrl Shift Face argued in a recent interview with the publication ‘Final Boss’:

“You don’t need the cutting edge deepfake tech to fool people. All it takes is a Facebook post and [the] less intelligent majority will spread it like a wildfire because it confirms their bias … it’s happening all over the world and huge companies, mainly Facebook (which is the main platform for this) should find a solution to this and filter this stuff out.”

The recent viral footage of US politician Nancy Pelosi, seemingly drunk but actually edited and slowed down so her answers seemed slurred and disjointed - which Facebook refused to remove - seems to bear this argument out. Civil society was quick to condemn the video and the platform for allowing it to be widely shared; one Instagram user, bill_posters_uk, was so incensed that he created his own deepfake video of Facebook CEO Mark Zuckerberg talking about his plans for global domination, which proved to be wildly popular.

So it seems that deepfakes may just be part of the disinformation deluge that big tech – and thus the world – faces. Until platforms can be held accountable for the content they allow to be published and shared, we all remain at risk of being duped by or falling victim to deepfakes, as the NATO StratCom report states:

"The main objective of entities like Facebook and Twitter is generating profits rather than defending Western political systems … the platforms remain unwilling or unable to address the fundamental problem of malign influence overall, even in its most simplistic characterisation as ‘fake news’."

#deepfakes #metamanipulation #democracy #disinformation #bigtech #digitalrights #humanrights


Hate speech after Christchurch: A multilingual problem

By Warda Imran, HMKW, University of Applied Sciences, Berlin

When the massacre in New Zealand’s Christchurch happened in March this year, the internet community was alarmed. The attacker, who killed 51 people, live-streamed the bloodbath on Facebook. With this, a social media platform was misused to spread extremist and violent content online.

This kind of blatant abuse of social media tools was one of the central topics on a panel on the third day of the Internet Governance Forum (IGF). The debate featured voices from Facebook, Microsoft and representatives of far-flung countries. However, the most interesting contribution probably came from South Asia, when the issue of multilingual hate speech was addressed.

“Companies like Facebook need to get off their high horse and work with local linguists, researchers and civil tech; this research needs to be constant and it needs to transfer into policy making,” said Yudhanjaya Wijeratne, data scientist from Sri Lanka and founder of the fact-checking organization “watchdog”, speaking exclusively to HMKW students.

Wijeratne spoke of a “design problem in algorithms” that fail to detect hate speech delivered in languages the algorithm is not trained in, including Sinhalese, Tamil, Urdu, Hindi and Afghani written in Roman script or in their original colloquial style. “There needs to be rigorous research into these languages and how hate speech manifests, and it’s not enough to just sit and analyze a few comments,” the data scientist adds.

Wijeratne’s solution: a human moderator plus automated analysis. This model assigns responsibility to the human in charge and provides ways of understanding outliers machines can’t learn, while offering the scalability only machine learning systems can provide. The fact-checker is also involved in a study of nine million words of Facebook content around three major terrorist attacks to assess how hate speech is presented. “Is it in English? Is it in their colloquial language? Who does it target?” he asks. Findings of this study can contribute towards better machine learning and moderation.
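The hybrid model Wijeratne describes - the machine handles the confident bulk, a human owns the outliers - can be sketched as a simple routing rule. The scoring function, blocklist and thresholds below are invented placeholders for illustration, not any real moderation system's:

```python
# Toy sketch of a "human + machine" moderation queue.

def score(text):
    # Stand-in for a trained classifier: probability the text is hate speech.
    blocklist = {"hateword1", "hateword2"}  # hypothetical terms
    words = text.lower().split()
    hits = sum(w in blocklist for w in words)
    return min(1.0, hits / max(len(words), 1) * 5)

def route(text, auto_remove=0.9, auto_allow=0.1):
    """Machine acts on the confident cases; uncertain outliers go to a human."""
    p = score(text)
    if p >= auto_remove:
        return "removed"        # high confidence: act automatically
    if p <= auto_allow:
        return "allowed"        # high confidence: leave alone
    return "human review"       # uncertain: defer to the moderator in charge

print(route("hello world"))                       # allowed
print(route("hateword1 hateword2"))               # removed
print(route("hateword1 and some more words here"))  # human review
```

The point of the design is that automation scales to millions of posts while the ambiguous middle band - exactly where colloquial, multilingual hate speech tends to land - always reaches a person.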

Regarding the speed at which extreme content spreads nowadays, Wijeratne said it is impossible for humans alone to regulate other humans. “Laws may be enforced in countries like Germany, but not Sri Lanka,” he added.


IGF panel defends legitimacy to discuss climate change

By Rachael Davies, HMKW, University of Applied Sciences, Berlin

Disagreements sprang up over whether climate change is a suitable topic for a conference on internet governance, during a panel discussion on ‘Internet Futures and the Climate Crisis’.

Max Senges, Lead for Research Partnerships and Internet Governance for Google in Berlin, voiced his concerns on whether the Internet Governance Forum 2019 is the best forum for such discussions. He posed the question to the panel of climate activists: “Is it really the best use of our time to go after this 2% of emissions?”

This statistic refers to the share of global emissions produced by the information and communications technology industry, encompassing emails, web searches, and online storage. Senges went on to say that perhaps a separate platform for debate should be created, outside of general internet governance.

This notion was challenged by multiple climate activists on the panel. “I think this topic is at the heart of internet governance,” said Marianne Franklin of the Internet Rights and Principles Coalition. “It’s impossible to separate the issues around the climate crisis from internet governance. We are the climate, this is our environment. We want the internet to survive, so the planet needs to survive.”

Lea Rosa Holtfreter, the panel representative for civil society, seconded Franklin, reasoning that the diversity and reach of the IGF is its greatest asset. “Bringing people from all the different sectors together, from the governmental sector, from the private sector, from civil society, is absolutely crucial”, she said.

The event took advantage of these different perspectives by breaking into smaller focus groups around specific topics, from misinformation and trolling online to intelligent, sustainable design. The flow of information and lively discussions around the room were evidence enough that the IGF is a viable forum to explore the relationship between climate change and the internet.

“An internet that runs on fossil fuel should be considered faulty,” argued Chris Adams of ClimateAction.tech. “It’s causing avoidable harm that we don’t need to be causing anymore.”


Tuesday, Nov 26

IGF: Youth demands more cybersecurity, transparency and a say

Monday, Nov 25

Cannot leave the next generation a web darker than today: Tim Berners-Lee

By Warda Imran and Veronica Sirianni of HMKW, University of Applied Sciences, Berlin

It almost sounds ironic: Tim Berners-Lee, the brain behind the World Wide Web, has issued a stern warning not to misuse his own invention. “We cannot leave the next generation a web darker than today,” he said on the first day of the fourteenth annual meeting of the Internet Governance Forum (IGF) in Berlin. The forum, which is attended by 3,000 Internet aficionados, is organised by the United Nations (UN).

Berners-Lee identified the flaws of his creation stating that abuse, harm and scams online have plagued the web. “Never before has the web’s power for good been more under threat,” he said to a packed audience. Vint Cerf, often dubbed ‘father of the internet’, shares this concern.  “When the internet was first developed it was just populated by engineers… when the general public gets access to a platform like this, it engages in many ways including abusive behavior and politics.”

As an antidote, Tim Berners-Lee unveiled a grand plan for “making the internet better”. His so-called “Contract For The Web” is nothing less than a “global plan of action to protect and build the web”, designed to shield it from state-sponsored hacking and attacks which are “unintended negative consequence of design”, he said, shirking liability. Berners-Lee had been working on this new manifesto since last year. The contract is being supported by 160 organisations including big tech giants like Facebook, Microsoft and many nonprofits such as Reporters Without Borders and Ranking Digital Rights. Apple, so far, has refused to join the initiative.

His work on the contract is aimed at finding a balance between freedom of expression and governance. Together with the German Federal Minister for Economic Affairs and Energy, Peter Altmaier, he explored the “virtual walls” existing today and how they can fall. The year 1989, when the Berlin Wall came down, holds special importance for Berners-Lee: the lifting of the iron curtain and the birth of the web. Alluding to the sentiment around the fall of the wall, he said, “Not only did the bricks fall, so did the barriers of human connections”.

#IGF2019 #HMKW

Further Resources