June 30, 2020 FACEBOOK RESPONSE TO EVALUATION ROADMAP: DEEPENING THE INTERNAL MARKET AND
CLARIFYING RESPONSIBILITIES FOR DIGITAL SERVICES
Facebook welcomes the opportunity to comment on the European Commission’s Inception Impact Assessment and
Roadmap assessing different policy options for reforming the regulatory framework for digital services operating in
Europe, in particular the part on deepening the Internal Market and clarifying responsibilities for digital services. We support the introduction of a harmonised EU framework for content regulation, and we support regulation of illegal and harmful content in the EU.
The European Commission is assessing three possible approaches in this context:
• A limited legal instrument would regulate online intermediaries’ procedural obligations, essentially making the horizontal provisions of the 2018 Recommendation binding;
• A more comprehensive legal intervention, updating and modernising the rules of the eCommerce
Directive, while preserving its main principles.
• Creating an effective system of regulatory oversight, enforcement and cooperation across Member States,
supported at EU level.
In our view, the ideal policy action would combine some elements from the Commission’s outlined policy scenarios:
• Better harmonised definitions;
• Recognition that both illegal and harmful content require action, but that they are distinct and require different responses;
• Support for incentives for voluntary measures;
• Confirmation of the limited liability regime of the eCommerce directive;
• Harmonisation of notice and take down procedures;
• Safeguards in the area of fundamental rights;
• A systemic approach to the oversight of online platforms.
We also believe that the European Commission should take into account existing legislative instruments, and
especially the Audiovisual Media Services (“AVMS”) Directive and its approach to country of origin and to oversight,
legislation still under discussion, like the (draft) Terrorist Content Online Regulation, and upcoming instruments, for
example the EU Strategy for a more effective fight against child sexual abuse online. It is important to ensure
harmonisation amongst all of these, learning from examples from the past and building towards the future.
II. Internal Market and COO principle
The EU Single Market remains Europe’s primary opportunity to achieve a leadership position in all industries, but
especially in digital industries where scale and market access are paramount. The freedom of establishment and the freedom to provide services across the EU have been fundamental to the development of a fully functioning internal market. At the core of these freedoms is the country of origin (“COO”) principle, which allows companies to establish themselves in the Member State of their choosing. This principle is currently under threat from various national laws (for example, initiatives such as the NetzDG law in Germany or the now-defunct provisions of the hate speech law in France), and the European Commission should contain these national initiatives by further reinforcing the COO principle through the upcoming Digital Services Act (“DSA”)1. This principle of course does not affect Facebook’s compliance with valid requests to act under national law.
The Digital Services Act can be an opportunity to create a better business environment that will help the EU
compete better with the US and China on the innovation and entrepreneurial front. There is no longer a separate
digital economy and traditional economy in Europe – digital tools are at the heart of how every sector and every
organisation operates. The same is true of early digital education, digital skills, and digital literacy strategies more broadly (such as the Lifelong Learning Programme or Women in Digital), which will be addressed in the EU’s forthcoming Digital Education Action Plan supplementing the EU recovery plan. A well-functioning, improved Single Market allows innovative ideas to scale and spread across Europe and ensures that European consumers and (especially
small and medium-sized) enterprises across the 27 Member States can reap the benefits of digitisation by growing,
hiring more people, innovating, trading across borders and ultimately scaling their businesses. Digital transformation
remains of paramount importance in this context. Our services have enabled enterprises of all sizes to run
affordable and efficient marketing campaigns across the EU to find new commercial opportunities, helping to
generate international sales corresponding to an estimated EUR 208 billion in economic activity and an estimated
EUR 98 billion in exports last year (EUR 58 billion worth of exports are from international sales within the EU, and EUR 40
billion are from sales outside the EU)2.
Without the necessary harmonisation and a deeper understanding of how to boost the Single Market, companies, and especially SMEs, can face burdensome and inconsistent compliance requirements that can decrease their innovation capacity and ability to scale, and may lead some to relocate outside the EU. We believe that the country of origin principle in the area of content regulation continues to be the right approach. This principle is not at odds with the recognition that full harmonisation of content standards might not always be achievable or even fully desirable.
III. Liability regime for platforms
1 It is also clearly recognised in other EU content legislation, such as the AVMS Directive.
2 Copenhagen Economics Study - “Digital Transformation in Business”: https://www.copenhageneconomics.com/publications/publication/digital-transformation-in-business
The Internet landscape has changed significantly since the adoption of the eCommerce Directive, which established the so-called limited (or secondary) liability regime. Nevertheless, this principle remains valid and we strongly support its reflection in the DSA.
The eCommerce Directive established that provided online intermediaries act expeditiously once they obtain actual
knowledge of clearly illegal content, they cannot be held liable for their users’ content. Over the years, this limited
secondary liability regime for online intermediaries has been essential to the development of an innovative Internet
economy in Europe and the protection of freedom of expression. A strict liability regime holding online
intermediaries directly liable would have prevented a whole range of innovative services from entering the market
and would have resulted in over-removal of content. Given its importance for the functioning of the Internet and the protection of fundamental rights, the limited liability regime should be preserved in the review of the framework under the Digital Services Act. It upholds the fundamental principle that individual users are directly responsible under the law for their online behaviour and the content they publish, unless the relevant online intermediary has been put on notice of clearly illegal content and has failed to remove it expeditiously.
IV. Notice and action harmonisation
Facebook supports maintaining a notice-and-takedown regime for illegal content. The current regime provided by
the eCommerce Directive strikes an appropriate balance because it recognises that internet platforms do not
generate the underlying content they host (rather, the uploaders do) and do not possess the capacity and requisite
knowledge to comprehensively assess the legality of most content on their platforms, while also allowing for liability
to attach when certain conditions are met. This means that internet platforms cannot turn a blind eye if they obtain
knowledge of clearly unlawful content on their platforms. In fact, most platforms, including Facebook, have
mechanisms in place to accept and manage these “notices,” e.g., dedicated reporting forms or contact points.
However, the current framework lacks clarity and consistency around what these notices should look like, which
leads to longer processing times than necessary. In order to facilitate the expeditious removal of illegal content, a
notification should contain all the necessary information for the recipient to act without communicating further
with the sender. It might be desirable to establish the minimum information needed for a notice to be actionable
(such as a unique URL, the alleged type of infringement or illegality, and the status of the notifier).
Equally, given that the removal of illegal material is often essential to limit wider dissemination, the recipient of the notice should have a clear policy for handling notices and provide systematic feedback and updates
on the status of the notices to the notifiers, so that they can increase confidence that their notices will be
considered and, if appropriate, acted upon expeditiously. Such notification systems should be accessible to all actors
and easy to use.
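To make the minimum-information idea concrete, the sketch below models a hypothetical notice carrying the three fields suggested above (a unique URL, the alleged illegality, and the status of the notifier) together with a simple actionability check. The class and field names are our own illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notice:
    """A hypothetical illegal-content notice; field names are illustrative only."""
    url: Optional[str]                 # unique URL of the reported content
    alleged_illegality: Optional[str]  # e.g. "defamation", "IP infringement"
    notifier_status: Optional[str]     # e.g. "rights holder", "trusted flagger"

def is_actionable(notice: Notice) -> bool:
    """A notice is actionable only if every minimum field is present, so the
    recipient can act without further communication with the sender."""
    return all([notice.url, notice.alleged_illegality, notice.notifier_status])
```

Under this sketch, a notice missing any minimum field would be rejected up front, prompting the sender to complete it rather than triggering a back-and-forth after submission.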
While Facebook has robust notice-and-takedown procedures, in some cases making an assessment as to whether or
not content is unlawful under a particular country’s laws is extremely challenging (e.g., whether a review left about
a business is true and legitimate or whether it may be defamatory). These nuanced cases require bespoke analysis
of a particular country’s laws, defences, scope, underlying facts, etc. to determine whether the content is illegal, and
an internet platform is simply not able to properly make this assessment. If platforms could be held liable for such
content, there would be a clear risk of over-removal or censorship because the threat of liability could act as an
incentive to act upon all duly received notices, regardless of the legality of the underlying content. However, when it
comes to content that is manifestly unlawful, then it remains entirely appropriate and reasonable that people
seeking to have third-party content removed or restricted should be required to submit an adequate notice to the
platform, which can then quickly act upon the notice.
All notifications should be made in good faith. Those who are proven to persistently abuse “notice and takedown”
procedures by sending claims which have no legal basis (for instance, parties trying to abuse intellectual property
rights of a third party) should be held accountable and intermediaries should be permitted to ignore their notices on
the grounds that such notices do not convey “actual knowledge”.
Finally, an underlying element that is essential for a functioning notice-and-takedown regime is the partnership with
the notifiers. Facebook works very closely with law enforcement, rights holders and other partners (e.g. civil society and researchers under the EU Code of Conduct on countering illegal hate speech) to ensure our processes run as smoothly as possible. We onboard them to our dedicated reporting channels, give them regular updates and training on how to use those channels, and embed their feedback into the reporting flow.
V. Illegal versus harmful content
We support EU level policy measures that require online intermediaries to have systems in place to address content
that is unlawful, as well as content that is lawful but nonetheless harmful. However, it is important to acknowledge
the differences between illegal and lawful but harmful content.
The reformed framework under the Digital Services Act should clearly distinguish between illegal content and lawful but
potentially harmful content. Harmful content is contextual, difficult to define, may be culturally subjective and is
often legally ambiguous. Harmful content should therefore not form part of the liability regime (and not be included
in the eCommerce Directive review). At the same time, it is desirable for society that online intermediaries have the
capacity to moderate and enforce against lawful but potentially harmful content according to their clear policies.
Not all content is suitable for all platforms and the communities they serve.
Any regulation looking at harmful content should focus on ways to hold internet companies accountable for having
certain systems and procedures in place to address harmful content rather than holding them liable for specific
content. This is the best way to ensure an appropriate balancing of safety, freedom of expression, and other
fundamental freedoms. By requiring systems such as user-friendly channels for reporting content or external
oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of
anonymised enforcement data, regulation could provide governments and individuals the information they need to
accurately judge online intermediaries’ efforts.
Other solutions should be accompanied by careful consideration. For example, regulators could require companies to hit specific performance targets, such as keeping the prevalence of content that violates a site’s hate speech policies below a threshold, or maintaining a specified median response time to user or government reports of policy violations.
While such targets may have benefits, they could also create perverse incentives for companies to find ways to decrease enforcement burdens, perhaps by defining harmful speech more narrowly, making it harder for people to report possible policy violations, or stopping efforts to proactively identify violating content that has not been reported. Alternatively, regulators could require that internet companies remove certain content beyond what is
already illegal. In this case, regulation would need to clearly define that content and the definitions would need to
be different from the traditional legal definitions that are applied through an established judicial process where
there is more time, more context, and critical analysis of the relevant underlying facts.
Another important element with regard to tackling harmful content, as mentioned in the Roadmap document, is cooperation through partnerships. Facebook recognises its limits and looks to safety experts, academic researchers, NGOs, human rights activists, and policymakers for their expertise and guidance, as well as to feedback from the people using our services. Partnerships are also essential when it comes to illegal content. The European Code of
Conduct on countering illegal hate speech online is a good example of how cross-industry cooperation and
partnership with civil society organisations can bring positive results in tackling illegal content online. The upcoming
DSA reform should promote these forms of collaboration.
VI. Legal incentives for voluntary measures
We do not want Facebook to be a platform for hate, abuse or exploitation, as we want to make sure that our users
are safe and that they trust and enjoy our platform. We consider it our responsibility to take active steps to tackle
abusive, harmful and illegal behaviour and content. Such steps should take account of the need to act swiftly to
remove the most harmful content while also accounting for the need to balance rights such as free expression and
access to information, particularly where content is not manifestly unlawful.
However, any new framework needs to clearly distinguish between the liability and responsibility of online
intermediaries. The law should continue to assign primary liability to users that put illegal or harmful content online
and limit the liability of online service providers (whose services are in fact being abused in these situations).
Primary legal liability should not attach for illegal content of which the platform is not properly made aware.
While intermediary service providers cannot be compelled by a Member State to provide general monitoring of
content or activities3, this should not prevent online intermediaries from taking voluntary steps to try to reduce the
prevalence of harmful content on their platforms. We welcome the European Commission’s acknowledgment that
platforms can face a dilemma when screening content, as taking these voluntary and responsible steps could expose
online intermediaries to increased liability concerns. We would welcome the European Commission’s legislative approach considering options for how to alleviate this situation.
A number of service providers currently engage in voluntary measures to better enforce their terms of service or to
protect users. However, any voluntary measures under the current framework contain some inherent risks:
exercising too much control can compromise the neutral status of the intermediary service provider and, as a
consequence, deprive them of the limited liability protection. Furthermore, there is no provision that would protect
the service provider from liability should their voluntary measures prove imperfect. This potential lack of protection
3 Article 15 of the eCommerce Directive
could hamper voluntary enforcement actions on the part of platforms. To address this, the upcoming DSA reform should provide companies with clear rules and responsibilities that do not disincentivise, but instead empower and encourage, these voluntary actions to limit and, where possible, prevent the distribution of illegal content online. This would also enable smaller platforms and startups to develop practical solutions that suit their scale.
The European Commission has defined “voluntary actions” in the context of internet intermediary liability as “good
practices for preventing, detecting, removing and disabling access to illegal content so as to ensure the effective
removal of illegal content” or “proactive steps to detect, remove or disable access to illegal content” (see Communication COM (2017) 555 of 28 September 2017, p. 3). However, this concept is not currently integrated into the eCommerce Directive, and we would welcome its integration into the Digital Services Act.
Built-in safeguards would be required to ensure that measures taken under the voluntary framework would not
compromise service providers’ limited liability. This would reconcile responsibility with online service providers’ freedom to conduct a business, the need for legal certainty for both the private sector and competent authorities, and the need to ensure that service providers are not perversely incentivised to interfere with their users’ fundamental rights.
VII. Transparency & oversight
Transparency plays a pivotal role in any accountability model powered by regulation. This is best achieved, in our view, by establishing a requirement for regular reporting consisting of a combination of clear quantitative metrics (such as prevalence) and qualitative analysis by which companies’ systems are judged. Companies should be held accountable for the effectiveness of their systems, rather than held liable for each individual piece of content. While one-off events or individual pieces of content will test the effectiveness of a system, one-off events should not be the focus when assessing compliance. Cooperation is needed when such events take place to help form guidance on best practice for systems. Transparency is key to building solutions for content moderation within the diverse network of stakeholders. Unlike targets, transparency can provide oversight and accountability without necessarily distorting the behaviour of platforms or creating unintended consequences (such as a potential chilling effect on free speech).
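As an illustration of the kind of quantitative metric mentioned above, prevalence is commonly understood as the share of content views that were views of violating content. The sketch below illustrates that ratio only; it is not the actual measurement methodology, which in practice relies on sampling and human labelling.

```python
def prevalence(violating_views: int, total_views: int) -> float:
    """Estimated share of content views that were views of violating content.

    Illustrative only: real prevalence estimation samples views and labels
    them, rather than counting every view on the platform.
    """
    if total_views == 0:
        return 0.0  # no views means nothing was exposed to users
    return violating_views / total_views
```

For example, 5 violating views out of every 10,000 total views would correspond to a prevalence of 0.05%, a figure a regulator could compare against a reported target.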
There is currently no recognised set of benchmarks for auditing content moderation systems, which is needed to develop trust in transparency reports, including verification of data on how much content is removed from the platform, the reasons for doing so, how much content was identified through proactive means, and how often the content appeared on the platform. An independent auditing system would also ensure that reporting could provide a verifiable understanding of the effectiveness of moderation systems as well as of the regulatory framework. It would be able to address the challenge of designing a transparency and accountability system that is flexible and responsive to the evolving range of products and services in the digital market. For this to be achieved, recognised audit benchmarks for independent auditing would need to be established, potentially via a combination of expert groups and non-governmental groups, which would allow for independent auditing of platforms’ transparency reports against those standards.
A systemic approach can create the incentives for platforms to tackle illegal content on their platforms quickly and efficiently, through appropriate systems, and enable the authorities to assess the efficiency and effectiveness of the systems put in place. This approach would allow platforms to develop the systems needed to address regulatory concerns, rather than imposing a one-size-fits-all approach that may be incompatible with how some platforms operate.
There is a value in exploring a single EU-wide coordinated oversight model, be it a body or a process within the
current institutional setting, that would enhance legal certainty by providing guidance to consumers and companies,
help the latter take reasonable, feasible, and proportionate measures and ensure protection of fundamental rights.
The oversight mechanism should not interfere with responsibilities within the jurisdiction of the Courts.
This kind of system is already foreseen in the Audiovisual Media Services (“AVMS”) Directive, where the regulator is asked to assess the policies, systems, reporting and redress mechanisms that platforms have put in place when assessing a platform’s potential liability. Combined with a co-regulatory model to tackle harmful content, the AVMS Directive can serve as a model for a potential framework covering both illegal and harmful content.
VIII. Promoting and safeguarding fundamental rights
Facebook supports the idea of an updated EU regulatory framework for online content that ensures companies are
making decisions about online speech in a way that minimises harm but also respects the fundamental right to free expression.
Currently, the eCommerce Directive merely mentions in its preamble that ‘the removal or disabling of access has to
be undertaken in the observance of the principle of freedom of expression’. At EU level, there are no established
guidelines with regard to the implementation of notice and action and the Directive leaves the subject matter to the
discretion of the Member States. Most Member States, however, did not introduce any safeguards or self-regulatory
measures. Rules established so far through jurisprudence, even though generally accepted by the European Court of
Human Rights, have certain limitations. For example, their scope is defined only by the specific examples that have
already been tried by the courts.
When regulating content that is harmful but not illegal, careful consideration of the impact on the protection of
fundamental rights is even more essential.
International human rights instruments such as the International Covenant on Civil and Political Rights (ICCPR) and
the European Convention on Human Rights (ECHR) provide the best starting point for analysis of regulatory efforts
to restrict speech. Article 10 of the ECHR carves out room for governments to pass speech laws that are necessary
for the “interests of national security, territorial integrity or public safety, for the prevention of disorder or crime,
for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the
disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.”
Laws governing speech should weigh different societal interests, with due respect for human rights and with the
check of an impartial judiciary. Drawing a clear line between legal and illegal speech helps citizens to know what is
and is not permissible and platforms to easily comply with the law with limited room for interpretation, at scale,
without undermining the goal of promoting expression4. Additionally, judicial checks and balances help to reduce
the likelihood of capricious restrictions on legitimate speech.
This short feedback paper is the start of an ongoing conversation we look forward to having with the European
Commission and other EU policymakers.
For more information, please contact:
Managing Public Policy Director, EU Affairs, Facebook
4 This was also reflected in the recent judgement of the French Constitutional Court on the Law on Countering Online
Hatred, link https://www.conseil-constitutionnel.fr/decision/2020/2020801DC.htm