The Court of Justice of the European Union recently issued a decision in X v. Russmedia which, despite not grabbing headlines, has the potential to fundamentally upend the internet — and to challenge how U.S. digital services operate in Europe. The CJEU may have just introduced obligations for websites and platforms to verify user identities, proactively scan and block content, and force third-party sites to take content down — all backed up by the threat of EU General Data Protection Regulation fines and litigation.
Background
The case arose from a dispute involving a Romanian woman and Publi24.ro, a Craigslist-style marketplace site in Romania operated by Russmedia. An unknown third party posted a fake ad about the woman, claiming she offered sexual services, and included her real photo and phone number in the ad. The plaintiff complained to Russmedia, which took down the posting within an hour. However, the ad remained available on other third-party sites.
This notice-and-takedown procedure complied with the EU's e-Commerce Directive, the e-commerce law in effect at the time — the predecessor to the Digital Services Act. The DSA retains the notice-and-takedown procedure and, like its predecessor, exempts platforms from liability if they promptly remove content after being notified that it infringes on others’ rights.
Despite this, the plaintiff sued Russmedia, claiming the fake ad contained sensitive data about her that had been published without her consent — thus violating the GDPR. The Romanian courts initially ruled Russmedia was not required to pre-check user content for lawfulness, and thus was not liable. But a Romanian appeals court referred the case to the CJEU to clarify which takes precedence: the GDPR or the DSA's liability exemption for platforms that follow notice-and-takedown procedures.
Observers who follow the GDPR and DSA may wonder why the CJEU even took this case. Doesn't Article 8 of the DSA state that platform providers have "(n)o general obligation to monitor" user-generated content for lawfulness? Without this general monitoring duty, don't both the e-Commerce Directive and the DSA effectively grant hosting providers an exemption from liability for unlawful user-supplied content, provided it's removed when prompted? Finally, doesn't Article 2(4) of the GDPR itself say the GDPR is "without prejudice" to the e-Commerce Directive and DSA — and in particular to the liability exemptions they give to platform providers? Perhaps sensing this tension, the CJEU's advocate general argued for a cautious approach: platform operators should face no GDPR liability as long as they act as "neutral hosting providers" — as opposed to generating content themselves.
The CJEU decision
Surprisingly, however, the CJEU took a different, far more GDPR-heavy approach. It held that Russmedia was a joint controller of personal data contained in the ads it lets users post. This finding rested on the court's perception that Russmedia provides the platform and sets the parameters within which ads are published, for its own commercial purposes. Its primary evidence was IP-related boilerplate in Russmedia's website terms: the clause giving it the right to use, distribute, transmit, reproduce, modify, translate, transfer and remove user-generated content at any time. On this basis, the CJEU held Russmedia "exert(s) influence, for its own purposes, over the publication on the internet of personal data" contained in user-created ads.
Having found that Russmedia was a joint controller of user-generated content, the CJEU spelled out what this means for platform operators. Since sensitive personal data cannot be processed without consent, the CJEU required online platforms to do the following. First, proactively identify user content that contains sensitive data before posting it, without waiting for complaints or reports from users. Second, verify the identity of the user posting that content, to confirm the user is posting his or her own sensitive data. Third, if the user is not posting his or her own data, refuse to post the content until the user can demonstrate consent from the individual to whom the data relates. Finally, impose technical measures to remove "unconsented" content from other sites that have posted it.
These were major, transformative findings. Thanks to the e-Commerce Directive and the DSA, the paradigm for decades had been, in effect, "platforms don't become responsible for user-generated content until someone notifies them it's problematic." In contrast, the CJEU's paradigm was: "The GDPR makes platforms just as responsible for user-generated content as the users who post it."
Russmedia intuited the court's requirements could upend how it and other platforms operate, and made this point in its arguments before the judges. But the court seems to have viewed the above as a straightforward application of the GDPR that poses no serious practical challenges. Russmedia argued the DSA prohibits imposing "general monitoring obligations" on platforms; the court countered that identifying user content containing sensitive data "cannot … be classified as a general monitoring obligation." Russmedia pointed to the GDPR's own statement that it is "without prejudice" to content hosting providers' DSA liability exemptions; the court replied that this actually means platform operators get liability shields for everything but GDPR violations. The court's overall tone seemed to be: this is all easy to handle, what's the big deal?
Did the CJEU just upend decades of internet law?
The court seemed unaware it was potentially undoing decades of carefully balanced law created for the online ecosystem. For one, the court's assertion that it has not imposed "general monitoring" obligations on platforms may be incorrect. There is no readily available means of singling out user-generated content that contains sensitive data in order to prevent it from being posted. To find such content, platforms may have to monitor all user posts.
Second, the court may have inadvertently imposed ID verification obligations that many privacy advocates abhor and that governments around the world have routinely rejected as a condition for using online services. The court required platform operators to verify the identity of any user who wishes to post sensitive data. But since there's no reliable way to know which users will and won't post sensitive data, what would platforms potentially need to do? Verify the identity of all users before letting them post. Not even the U.K. Online Safety Act requires this; it requires age verification to access adult content, but not identity verification. Russmedia thus seems to require unmasking previously anonymous users.
Content scanning and ID verification are each major policy issues that countries have debated for decades, involving a host of important and competing interests that national policies seek to balance. In the court's decision, however, the GDPR trumps these other interests. German tech publication Heise describes the court as requiring platforms to create a new and separate "clean net" for Europeans. American scholar Daphne Keller, whose work was cited by the CJEU's advocate general in his opinion, posted shortly after the decision, "I doubt (the court) understood the consequences" of its decision.
Looking forward
The major question for the Russmedia decision will be: How broadly will it be applied? Russmedia operated a marketplace, and the CJEU speaks exclusively in terms of the duties of online marketplaces. But its rationale could arguably apply to any online platform that permits user-generated content. This could include review platforms, chat forums, comment sections of digital media, or social media networks. A joint declaration from the data protection authorities of Berlin and Hamburg already interprets the decision as applicable not just to marketplaces, but to a variety of online platforms and hosting services.
If this occurs, the consequences for free expression could be dramatic. In Europe, persons who are injured by unlawful social media posts can sue — or file criminal charges against — the poster. A legal industry has sprouted in some European countries dedicated to finding posters and suing them on contingency. What if these lawsuits could be filed not just against the posters, but against the social media networks themselves? Or against local digital media sites that don't police their comment sections closely enough?
To illustrate: It's commonplace to see politicians insulted as Nazis or, on the other end, as Communists. Such an insult arguably suggests a political opinion of the politician, which is sensitive data under the GDPR. European politicians already regularly file complaints against individuals for insults; per German media, Chancellor Friedrich Merz has filed "hundreds" of complaints against his online detractors. What would happen if social media networks believed every "He's a Nazi" post carried a 7,000 euro price tag, as the Russmedia post did, or a 4%-of-global-turnover GDPR fine? To avoid that, they would not only have to prevent such posts from going up, but also, if something slipped through, find ways to force third-party sites to take them down. The consequences for free expression and public debate seem major and adverse.
Also, if applied broadly, Russmedia could lead to further tensions in an already volatile trans-Atlantic environment. U.S.-based companies could operate in full compliance with the DSA and Digital Markets Act but still find themselves sued or investigated because they failed to prevent a post that purportedly contains sensitive data. The current U.S. administration already describes EU digital legislation as undermining U.S. interests, particularly speech rights, which have historically enjoyed bipartisan support. During President Barack Obama's first term, the U.S. Congress passed the SPEECH Act, letting Americans block foreign defamation judgments, in response to a surge in libel tourism: the prospect of ruinous foreign libel judgments was seen as suppressing Americans' First Amendment-protected speech at home. How might President Donald Trump's administration respond if EU enforcement starts requiring U.S. platforms to proactively police and stop speech because of its content, or face material liability?
Other potential consequences of Russmedia land within the EU itself, as the decision may throw key lawmaking projects into question. If the DSA's longstanding liability exemption for content hosting providers can lose out to the GDPR despite years of practice, what does that mean for the rest of the EU's digital legislation, such as the Digital Markets Act, Data Act, AI Act and NIS2 Directive?
Similarly, Russmedia may hamper European competitiveness against U.S. providers. The legal uncertainty and liability risk Russmedia entails are on a scale that only large U.S. platforms may be able to manage. Budding European competitors may lack the financial wherewithal to build ID verification and proactive content filtering and review, or to withstand the risk of litigation if either fails.
Of course, predictions of how court decisions will play out are notoriously difficult to make. Regulators may interpret Russmedia more narrowly than this analysis does, or the CJEU may clarify the decision in the future to prevent it from supporting mass litigation. Still, this decision is the clearest statement to date that online services are just as responsible for user content as the users who post it. That potentially upends a fundamental tenet of the internet dating back to the 20th century, and as such is worth following closely.
Daniel Felz, CIPP/E, is a partner at Alston & Bird.


