Commission: EU Code of Conduct on countering illegal hate speech online continues to deliver results


Yesterday, the European Commission released the results of its fifth evaluation of the 2016 Code of Conduct on countering illegal hate speech online.

The results are overall positive, with IT companies assessing 90% of flagged content within 24 hours and removing 71% of the content deemed to be illegal hate speech. However, the platforms need to further improve transparency and feedback to users. They also need to ensure that flagged content is evaluated consistently over time: separate, comparable evaluations carried out over different time periods showed divergences in performance.

Věra Jourová, Vice-President for Values and Transparency, said: “The Code of Conduct remains a success story when it comes to countering illegal hate speech online. It offered urgent improvements while fully respecting fundamental rights. It created valuable partnerships between civil society organisations, national authorities and the IT platforms. Now the time is ripe to ensure that all platforms have the same obligations across the entire Single Market and to clarify in legislation the platforms’ responsibilities to make users safer online. What is illegal offline remains illegal online.”

Didier Reynders, Commissioner for Justice, said: “I welcome these good results. However, we should not be satisfied with these improvements and should continue the good work. I urge the platforms to close the gaps observed in the most recent evaluations, in particular on providing feedback to users and on transparency. In this context, the forthcoming Digital Services Act will make a difference. It will create a European framework for digital services and complement existing EU actions to curb illegal hate speech online. The Commission will also look into taking binding transparency measures for platforms to clarify how they deal with illegal hate speech on their platforms.”

The fifth evaluation shows that on average:

  • 90% of flagged content was assessed by the platforms within 24 hours, compared with only 40% in 2016.
  • 71% of the content deemed to be illegal hate speech was removed in 2020, whereas only 28% was removed in 2016.
  • The average removal rate, similar to the one recorded in the previous evaluations, shows that platforms continue to respect freedom of expression and avoid removing content that may not qualify as illegal hate speech.
  • Platforms responded and gave feedback to 67.1% of the notifications received, up from 65.4% in the previous monitoring exercise. However, only Facebook informs users systematically; all the other platforms have to make improvements.

Next steps

The results obtained from implementing the Code of Conduct over the last four years will feed into ongoing reflections on how to strengthen measures addressing illegal content online in the future Digital Services Act package, on which the Commission recently launched a public consultation.

The Commission will consider ways to prompt all platforms dealing with illegal hate speech to set up effective notice-and-action systems.

In addition, the Commission will continue in 2020 and 2021 to facilitate dialogue between IT companies and civil society organisations working on the ground to tackle illegal hate speech, in particular to foster engagement with content moderation teams and mutual understanding of local legal specificities of hate speech.

Background

The Framework Decision on combating racism and xenophobia criminalises public incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin. As defined in this Framework Decision, hate speech is a criminal offence, including when it occurs online.

In order to respond to the proliferation of racist and xenophobic hate speech online, the European Commission and four major IT companies (Facebook, Microsoft, Twitter and YouTube) presented a Code of Conduct on countering illegal hate speech online on 31 May 2016. Since then, Instagram, Google+, Snapchat, Dailymotion and Jeuxvideo.com have joined the Code.

The Code of Conduct is based on close cooperation between the European Commission, IT platforms, civil society organisations (CSOs) and national authorities. All stakeholders meet regularly under the umbrella of the High Level Group on combating racism and xenophobia to discuss challenges and progress.

Each monitoring exercise was carried out following a commonly agreed methodology that makes it possible to compare results over time. The fifth exercise was carried out over a period of six weeks, from 4 November to 13 December 2019, by 34 civil society organisations and 5 public bodies, which reported on the outcomes of a total sample of 4,364 notifications from all Member States (plus the United Kingdom) except Luxembourg, the Netherlands, Malta and Denmark. Notifications were submitted either through reporting channels available to all users or via dedicated channels accessible only to trusted flaggers/reporters.

ec.europa.eu

