EU Influence on U.S. Internet Law, Policy and Practice in the Field of Digital Advertising

Presentation to the Yale Law School / Floyd Abrams Institute conference “Commercial Speech and the First Amendment 2020” 

2 June 2020

Introduction

In keeping with the subject of today’s conference, I will focus on the regulation of online advertising, talking briefly about the existing situation and then focusing on what’s coming next. I will occasionally zoom out to look at EU regulation of online platforms more broadly.  

What a time it is to be talking about platform regulation! We have no Donald Trump in Europe, but we do have lots of other presidents. The techlash over here is almost as heated, and the basic issues around platform responsibility are similar: should tech companies be doing more, i.e. taking greater responsibility for keeping their platforms safe and lawful, and for dealing with the negative externalities of their success? Should they be doing less, e.g. interfering less with their users’ speech and/or personal data? Commercial speech and political advertising are a core part of the debate. As in the US, the European debate is throwing up a broad range of concerns and demands, which are diffuse and often contradictory, but which are also so heated and repeated that some kind of additional regulation is all but inevitable. A major new EU legislative initiative was launched just this morning, which I will get to.

The idea that there might be such a thing as EU influence on US law, policy and practice in the field of digital advertising is based on two things:

  1. EU digital rulemaking has a habit of directly or indirectly influencing US policy discussions – we have seen that in the field of privacy, and the EU is explicitly seeking to promote what it calls “its model of a safe and open global Internet”. EU initiatives in areas such as online content moderation have the potential to set at least some kind of standard.
  2. More practically and immediately, the EU has no qualms about having its rules apply to non-EU companies that process data of EU citizens, or generate revenue in the EU. EU regulations in this field are, to a significant extent, aimed squarely at the big US tech companies (a group that increasingly includes TikTok, but then TikTok is increasingly becoming American). Some of these regulations were conceived as targeting quite specific platforms, although they ended up with definitions that potentially cover a much broader range of services. You still see many US media outlets essentially EU-fencing their websites to avoid having to comply with the GDPR, but that is not a practical option for most US tech platforms. Once you have major commercial interests in the EU, you have to find a way to comply with EU regulations. It is no coincidence that many recent decisions of the EU Court of Justice have the big US tech companies in their case titles: Apple on tax and commercial contracts, Amazon on online marketplace liability and consumer protection, Airbnb on real-estate regulations, Uber on transportation regulations, Google and Facebook on speech, search, telecoms and privacy regulations.

So EU regulations are both directly and indirectly relevant to US tech and media companies, and therefore to US tech and media law practitioners. For the same reasons, it helps to know what’s coming down the European pike. And the take-away from my presentation is: the road ahead may not always be pretty, but it is going to be busy.

Current rules

To provide some very rough context: current rules affecting online advertising, and advertising platforms, are spread over a wide range of EU measures. 

For advertisers, there are directives for example on unfair commercial practices, and comparative and misleading advertising. Many EU countries have additional laws. They also tend to have self-regulatory codes on the content of advertising, which are formally non-binding but nonetheless effectively set the rules. These rules are generally quite detailed, and quite focused on consumer protection – at the expense of commercial interests and, some would say, freedom of expression. If you enjoy watching ads for prescription drugs, you’re not going to enjoy Europe. And if you want to run an ad in the Netherlands for an alcoholic beverage that features packaging that depicts a sports activity – well, you can’t (although thankfully you can depict sports activity in the ad, as long as it is there to provide context for post-activity celebrations). Yes, people actually litigate these provisions.

For broadcasters and VOD services and their advertisers, the starting point is the Audiovisual Media Services Directive. This contains quantitative limits on TV advertising (currently 12 minutes per hour, soon changing to 20% per day). It prohibits media content that incites hatred, and protects children against harmful content. It requires that advertising be recognisable, and separate from editorial content, and regulates product placement and split-screen advertising. The directive restricts advertising for alcohol, and outright prohibits surreptitious advertising, advertising for tobacco and prescription medicines, and advertising that causes physical or moral detriment to minors.

For online advertising platforms, the starting point is still the Electronic Commerce Directive from 2000. The broad safe harbour for hosting services has mostly protected ad platforms from liability for unlawful ads, but not from injunctions. These injunctions can order platforms to do all sorts of things, including taking down unlawful ads, providing subscriber data and, on occasion, preventing unlawful ads from appearing in future, for example through automated filters.

In the online context especially, privacy rules are an increasingly important part of advertising regulation. The GDPR does not contain specific rules on micro-targeting, but it does make micro-targeting more difficult than it is in the US. The GDPR sets a range of broad data protection principles (such as transparency, purpose limitation, data minimisation). It sets extra-strict rules on the processing of sensitive types of information such as information about a person’s political opinions, health, race, sexual life, etc. Processing that kind of data is essentially prohibited absent specific consent, though there are some narrow exceptions, e.g. for political parties to use personal data of current or former members or people who have been in regular contact with the party.

The relationship between the GDPR and freedom of expression is not well defined. Article 85 says that member states “shall by law reconcile the right to the protection of personal data pursuant to this Regulation with the right to freedom of expression and information,” but gives little indication of what that means in practice. The GDPR is a broad, horizontal regulation, and though its drafters had online marketing in mind, the interface with political speech is entirely undeveloped.

The context for all these EU measures is the EU Charter of Fundamental Rights, which recognises and protects several dozen rights, including freedom of expression, but also privacy and data protection, intellectual property, the freedom to conduct a business, and the right to an effective remedy. The basic EU approach is that all fundamental rights are equal in principle, and when they collide a fair balance must be found based on the circumstances of the specific case. That sounds nice in theory, but is hard and unpredictable in practice.

In general, both political and commercial advertising are protected by the right to freedom of expression. However, the protection for political advertising is stronger. Having said that, political speech is one of those areas where European harmonisation is quite limited: major differences exist between countries. Some countries outright prohibit paid political advertising on broadcast media (the UK, Ireland, Germany). These prohibitions generally don’t apply to advertising on social media, though the costs of such ads can be relevant to campaign expense restrictions. Other countries regulate political advertising only in the run-up to elections (France). Some have no rules at all.

There are almost no EU-level rules on political advertising, with the possible exception of a regulation on privacy compliance by political parties campaigning for the 2019 elections to the European Parliament – a topic so obscure that the EU managed to adopt a regulation without anyone noticing. In the run-up to those elections, platforms including Facebook, Google, and Twitter signed up to a voluntary EU Code of Practice on Disinformation, making a range of commitments on, for example, scrutiny of ad placements and transparency in political and issue-based advertising. These platforms have all created ad repositories, and political ad transparency tools.

Recently adopted legislation

Let’s turn to what’s coming next. I’ll start with three pieces of legislation that have already been adopted and will be coming into effect shortly, and then move on to rules that are still being made or considered.

P2B Regulation

Rules passed, but not yet in force, include, first of all, the Platform-to-Business Regulation – or, to give it its fancy title, the Regulation on promoting fairness and transparency for business users of online intermediation services. This regulation applies from 12 July 2020 (i.e. six weeks from now). It gives businesses that sell goods or services through online platforms some specific new protections around contractual terms, transparency, access to data, and complaint mechanisms. Both platforms and search engines will have to inform businesses about ranking parameters, and search engines will have to provide some degree of transparency around self-preferment of their own products and services.

The regulation is a bit of an overlobbied compromise mess, so it might take a few years to figure out what impact, if any, it has. To a US lawmaker figuring out ways to regulate the big platforms, especially around B2B e-commerce issues, this regulation is a logical place to look for inspiration. 

AVMSD revision

Secondly, a change to the Audiovisual Media Services Directive was agreed in late 2018, which has to be implemented by EU member states by 19 September 2020. The most interesting change is that, while the directive previously applied only to linear broadcasting and on-demand services, it is now being extended to so-called video-sharing platform services. These are services featuring user-uploaded video content over which the service provider does not exercise editorial responsibility, but which it does organise. That definition clearly targets YouTube, though how many other platforms it touches will be a key issue going forward. The idea is to create a more level playing field, in terms of consumer protection, between centrally programmed video services such as Netflix, and UGC video services such as YouTube.

The new rules specifically concern the protection of minors, and certain types of per-se unlawful content. The rules oblige video-sharing platforms to “take appropriate measures” to protect users from uploaded content – including advertising – that is harmful to minors, or which contains incitement to violence or hatred, provocation to commit terrorist offences, child pornography and offences concerning racism and xenophobia. Where platforms sell their own ads, these have to comply with the same rules as ads on VOD platforms. With respect to third-party ads, video-sharing platforms have to “take appropriate measures to comply” with these rules.

The key term here is “appropriate measures”. How hard do video-sharing platforms have to work to keep bad ads, and other bad content, off their platforms? The directive makes it clear that these measures do not relate to exercising control over the content of third-party uploads, but to its organisation and presentation. It provides a list of examples, including self-certification by uploaders; flagging, rating and reporting tools; parental controls; and media literacy and awareness programs.

It will be interesting to see how these rules are applied by platforms in practice, and how they are interpreted by regulators – in particular the Irish regulator, given that most of the big platforms have their European entities there. There is considerable room for debate on how broad the definition of ‘video-sharing platform service’ is, and how hard platforms need to work to keep bad content off their platforms. 

Consumer omnibus

Finally, there is a large-scale revision to a bunch of EU directives in the field of consumer protection – catchily called the Consumer omnibus – that will have to be implemented in national laws by 22 November 2021. Most of it is tangential to online advertising, but it does include provisions e.g. requiring paid-for search results to be clearly identified as such, and extending a range of transparency rules to online B2C marketplaces.

There are other pieces of important EU legislation impacting platforms that are currently being transposed into national laws, most notably the Copyright Directive (where there is some pretty crazy stuff going on), and also (slightly less excitingly) the European Electronic Communications Code. But they’re not really related to advertising, so I’ll leave them for another day. 

Future legislation – the Digital Services Act

That brings us to future legislation.

There is no such thing as an executive order in EU law, but the legislative machine is very much gearing up. Just this morning, the European Commission (which initiates EU legislation) published a consultation on the so-called Digital Services Act. The initiative looks to revise and expand the Electronic Commerce Directive which, for the last two decades, has been the bedrock of EU internet regulation, including the broad safe harbours that have allowed the big US platforms to grow and prosper – unfettered, its critics argue, by serious responsibilities. The consultation heralds a big, complex, emotive, multi-year lobby fight, on a par with the ones over the GDPR adopted in 2016 and the Copyright Directive adopted last year.

Although there is also another piece of EU legislation pending that is relevant for online advertising, the ePrivacy Regulation, I want to focus on the DSA because it is a much broader, more ambitious project: nothing less than the creation of “a modern rulebook for digital services” as one of the responsible commissioners calls it.

The DSA consultation launched today focuses on a number of issues and themes.

  • Illegal and harmful content: The Commission is looking for information on experiences with illegal content such as hate speech and counterfeit goods, and with harmful-but-not-illegal content such as disinformation, including experiences with content moderation.
  • Responsibilities of platforms and other digital services: The Commission is soliciting ideas on the extent of platforms’ responsibilities in relation to illegal and harmful content. There’s a ton of detailed questions around different kinds of cooperation obligations for different kinds of platforms in relation to different types of bad content. 
  • Reviewing the safe harbour: The Commission wants to know how the current E-commerce Directive liability framework is working and where upgrades are needed for a number of digital players such as social media, search engines, online marketplaces or cloud storage providers. There are important questions around whether the safe harbour for hosting services is sufficiently clear (hint: it isn’t), and around the Good Samaritan problem, which is that platforms that moderate content risk disqualifying themselves from the safe harbour for being too active, and thus being liable for any content they keep up. CDA 230’s approach to moderation, which is to immunize platforms both for moderation and for non-moderation, is not a part of the ECD. Removing disincentives for voluntary action is a key part of the Commission’s thinking.
  • Perhaps the most radical issue is around Gatekeeper platforms: there is a concerted push from a number of member states and stakeholders to create a new regime of ex ante rules targeting particular large platforms. This is essentially mandating regulators to set and enforce specific rules for specific big platforms. The precise definition is still very much up for debate: systemic platforms / dominant platforms / platforms with significant network effects / platforms which play a gatekeeper role. The consultation asks companies to describe their issues with big platforms, and get into the implications, definitions and parameters of notions such as “market power” and “gatekeeper power” of online platforms such as app stores, online marketplaces, operating systems and search engines. We’re talking about specific issues such as data portability, barriers to entry, terms and conditions for business relations with platforms, but this has the potential to become much bigger. European telecoms firms are just beginning to emerge from 25 years of very detailed ex ante regulation, and there is considerable potential for the EU to attempt something similar with online platforms.
  • The “other emerging issues” chapter is almost entirely concerned with online advertising, both commercial and political. From the consultation, it’s clear that the European Commission is especially focussed on all sorts of ad transparency issues, and the sale of counterfeit and other dangerous or unlawful goods via online platforms. But the final question is “Are there any other emerging issues in the space of online advertising you would like to flag?”, so this could still go in many directions.
  • Gig economy platforms: The consultation also seeks input on ride-hailing, food delivery and domestic work platforms, including on the terms and conditions set by the companies (fees, allocation of liability in terms of damage…) and on potential health and safety risks.

The public consultation is going to run for three months, but there are separate roadmap exercises, on the responsibilities for digital services and the ex ante instrument, that require input by June 30. Companies doing online business in Europe should probably consider submitting a response: if the copyright fight is anything to go by, sitting out the debate, or leaving it to broad industry groups, is not going to be the best way to be heard. We should see a formal legislative proposal from the European Commission at the end of the year. 

In short, while the Commission’s plans are probably not set in stone, the direction of travel is pretty clear. As one of the roadmap documents puts it, the intention is: “to reinforce the internal market for digital services, to lay down clearer, more stringent, harmonised rules for their responsibilities in order to increase citizen’s safety online and protect their fundamental rights, whilst strengthening the effective functioning of the internal market to promote innovation, growth and competitiveness, particularly of European digital innovators, scale-ups, SMEs and new entrants.” Whether you can do all those things at the same time very much remains to be seen. For more on this, and a broader overview of the European approach to platform regulation, I would refer you to my MLRC article that is in the reading material.

Remember this is just the Commission’s thinking. 27 Member States, the European Parliament and hundreds of stakeholders are going to be pulling this in all kinds of directions, most of them towards making the rules stricter. By means of a prominent example, the French government has already come out with its two priorities:

  1. “An economic ex ante regulation of structuring digital platforms to tackle problems deriving from [their] exorbitant market power,” and
  2. “to strengthen at EU level the responsibility of platforms, given the significant risks users face with regards to access to illegal or dangerous content and products.”

Given the aim of “ensuring that digital service providers present in the Union act responsibly to mitigate risks emanating from the use of their service, respecting Union rights and values, and protecting fundamental rights”, we can expect the idea of imposing overarching “duties of care” to get another serious outing. The Commission refers to “harmonising a set of specific, binding and proportionate obligations, specifying the different responsibilities in particular for online platform services”. Arguably, this notion that platforms have general responsibilities, duties or obligations is already implicit in some recent CJEU and national case-law, and it has been formally proposed in the UK. It sounds very reasonable to focus on a platform’s overall approach to preventing harm rather than each individual decision. However, operationalizing it gets messy quite quickly, as readers of Daphne Keller’s recent blog posts on the idea will understand. Indeed, we may get a first glimpse of that messiness this autumn, when video platforms and regulators start grappling with the “appropriate measures” standard under the revised AVMS Directive. If the DSA ends up making a platform’s right to invoke a safe harbour conditional on meeting some kind of systemic duty of care, that would incentivize platforms to block risky content more broadly, and still leave them open to additional injunctions in specific cases. I think Allen Dickerson made a similar point in the previous panel about the quantum of content that ends up getting blocked.

So is it all bad news for the big US platforms? Mostly, yes, if you consider more regulation to be bad news – though to be fair, some platforms have increasingly been arguing for more rules, particularly in the sphere of content moderation. One potential upside of the Commission’s plans is that it wants to do away with all the various national laws that have been springing up over recent years, e.g. in France and Germany, and replace them with a single European rulebook. It also wants to strengthen the country of origin principle, which is good news for platforms that are quite satisfied with life under Irish regulation. Finally, the Commission promises to increase legal certainty, which would be a good thing. However, as Mike Tyson might have said if he had been an EU platform regulation lawyer, everyone plans for their legislative proposal to increase legal certainty until they get punched in the mouth by Brussels compromise politics.

Concluding remarks

I want to end with some remarks on the oddities of EU rulemaking, and first of all, on the overall European approach to freedom of expression. 

First Amendment sensitivities – or lack thereof

From a distance, the debate in the US on internet policy often seems almost paralysed by the question of whether a particular measure would be compatible with the 1st Amendment. That spectre does not hang over the EU policy debate. Everyone recognises that freedom of expression is fundamental and important, but there is also a broad recognition that it is not absolute, and should be balanced with other fundamental interests. This idea of fair balance takes some getting used to if you come from a place where freedom of expression outweighs all other fundamental rights (although a European jester might note that free speech fends off a privacy claim easily enough in the US, but can wither quite quickly before a copyright claim).

This is not the forum to discuss the two approaches in detail, but I will note a specific problem with the EU’s “balancing” approach when it comes to freedom of expression. All these other interests have their own detailed regulations at the EU level: the 55,000-word GDPR, dozens of IP directives, sprawling telecoms and consumer protection rulebooks, and just wait until you see the DSA. Each directive or regulation starts with recitals emphasising a “high level of protection” for the subject of the directive before going on to operationalise that in detailed provisions. There are dozens of EU directives which reference and laud freedom of expression before going on to limit it in various ways. For all that, there is not a single EU directive on freedom of expression. No instrument that emphasises a high level of protection for free speech and then goes on to operationalise that in detailed provisions. Freedom of expression is everywhere in EU digital policy making, but it is nowhere central.

This willingness in Europe to limit freedom of expression, for example to protect privacy interests, has regularly caused First Amendment lawyers to wax lyrical to me about why their ancestors got on the Mayflower. Whatever your views on the EU approach, it is helpful to bear it in mind when considering EU policy or interacting with EU policymakers, because the 1st Amendment is a real source of US-EU misunderstanding in tech policy. EU politicians tell each other weary jokes about US tech companies who come to lobby them and complain that some legislative initiative is bad because “it would violate the 1st Amendment”. Their (generally unspoken) response to that is: the 1st Amendment is your problem, not ours. Put bluntly, Europe does not care about the 1st Amendment. It cares about freedom of expression, but not as absolutely as a lawyer trained in the 1st Amendment might expect.

The peculiarities of EU rulemaking 

From a US perspective, EU rulemaking can be perplexing at the best of times: 

  • an unfamiliar range of legal instruments, such as regulations, directives and guidelines, with quite different functions and effects;
  • a bewildering cast of elected and unelected politicians, many of them with “President” in their job title (if you think one President expressing an opinion on platform regulation is trouble, Europe will give you nightmares);
  • a byzantine legislative process that makes the US Congress look streamlined, rational and transparent (think amendment exchange followed by conference committee, but then between three institutions, with 27 national governments and parliaments, and armies of stakeholders, breathing down their necks);
  • and then, once a law has finally been adopted, the realization that nothing has actually happened yet, because 27 separate EU member states still have to implement the new EU rules in their national law. There is generally around a two-year gap between the adoption of an EU measure and its actual application at national level. As I described, we’re currently in that weird in-between stage for a range of important EU laws. There are difficult legal questions around all the obvious permutations: member states which transpose too soon, too late, too little, too much, etc.;
  • and even when rules have been transposed into national law, it turns out that national legislators and courts actually have little say in what EU rules mean, which means you have to wait several more years until cases work their way up through national courts and then on to the CJEU – whose decisions can be (shall we say) challenging to interpret.

These complexities are magnified in the context of EU rulemaking in the digital sphere – there is high media and political interest, and the competing interests at stake are enormous. The lobbying is massive, and often vicious. That often begets an unfathomable compromise of wordy recitals and delphic, contradictory provisions. It then falls to the EU Court of Justice to ‘interpret’ (aka make sense of) these rules, one question at a time. They say a camel is a horse designed by a committee. That’s always been unfair to camels, but by the standard of recent EU legislation, camels are particularly beautiful, intelligible and practical creatures. As for the Digital Services Act, we’re probably gonna need another metaphor.

The EU legislative system is very far from perfect, and if anything one of its faults is that its default setting is to produce legislation, even when that legislation has potentially far-reaching effects that haven’t all been fully thought through. There are literally dozens of digital initiatives of various kinds percolating through the Brussels legislative machinery, and while not all of them will get onto the statute book, a serious number of them will. So buckle up, we’re in for quite a ride.


