It is possible to rein in Big Tech, but it will require us to get off our high horse on the superiority of EU legislation implementing human rights and European values.
There is extensive discussion in the media about whether the EU can win the battle to enforce its digital legislation against American Big Tech. The White House supports Big Tech by threatening additional tariffs if the EU does not stop fining tech companies for violating European digital regulations, including the Digital Markets Act. Earlier, U.S. Vice President JD Vance warned that the U.S. could reconsider its support for NATO if the EU continues to enforce its rules against American social media platforms.
The European Commission is now caught between a rock and a hard place. If it backs down on enforcement, it loses any meaningful regulatory leverage, allowing Big Tech to continue the very practices that prompted this legislation in the first place. If the Commission holds firm, it will aggravate the trade war.
If the options are framed in such binary terms, the Commission is locked into a zero-sum game in which you either win or lose. Going along means EU legislation becomes just a bargaining chip, to be traded away against more urgent interests.
I think it is possible to rein in Big Tech and at the same time avoid aggravating the trade war. This requires acknowledging that Big Tech has a point when it labels the USD30 billion in fines imposed as "tariffs." The EU and the U.S. have diametrically opposed ways of regulating, which has been creating friction since the beginning of the digital era, dating back to well before the Trump administrations. Once you understand the differences, you also see why Europe's enforcement is counterproductive.
Rights-based vs. harm-based
European legislation is based on fundamental rights and rooted in the precautionary principle: we try to prevent harm. In the U.S., regulation is harm-based, the market is left to its own devices, and the legislature only intervenes when new services cause unacceptable harm to consumers or businesses — that is, after damage has occurred.
Data protection is a simple example. Under the EU General Data Protection Regulation, data protection is a fundamental right of individuals, and a key principle is that personal data must be adequately secured to prevent damage when data is lost. In the U.S., there is no such general obligation to secure personal data. In 2007, when some massive data security breaches made American headlines, almost all states introduced a legal obligation to report data breaches to victims.
As a European, I found it difficult to accept that American law contained no general data security obligation. By now I have come to think that the American system, while not all-encompassing, is often very effective. In my experience, data security at companies in the U.S. is on average better than in the EU. My point: there is no right or wrong; the systems are just different.
Process rules
If legislators want to prevent harm from new technologies, they must first predict the harmful impact these technologies may have on individuals and society. Because this is notoriously difficult, the EU is introducing so-called risk-based legislation that obliges companies to regulate themselves through adequate risk management. This is reflected in the GDPR requirement to carry out a data protection impact assessment in the event of high-risk processing. In a similar vein, the EU Artificial Intelligence Act requires a conformity assessment to be carried out in the event of high-risk AI systems.
Though all of this is valid legislation if you want to prevent harm, it is also very process-driven: the rules prescribe how companies must organize themselves. American companies experience this legislation as extremely patronizing, because it prescribes how they should organize their risk management, instead of punishing them if they do not do their job properly. Resistance to this type of regulation is also growing in the EU, because companies perceive it as an excessive administrative burden and therefore as "over-regulation."
Where there are many process rules, you also get enforcement of violation of process rules. For supervisory authorities, it is easier to fine a company for not having a process document — such as a DPIA — than it is to prove that a practice itself is harmful.
Because the EU has few tech companies of its own, most enforcement is directed against Big Tech, and mostly for lacking process documents. Big Tech experiences this as a tax on U.S. companies doing business in the EU, hence the labelling as a "tariff."
A case in point is the 290 million euro fine imposed on Uber by the Netherlands data protection authority, Autoriteit Persoonsgegevens, for failing to have the EU standard contractual clauses in place for data transfers to the U.S. At the time of the fine, both the data exporter and the data importer were directly subject to the GDPR, the contract was no longer mandatory, and there had been no impact on the privacy of citizens whatsoever. It would have been much more relevant if the DPA had issued a decision on the merits, for example, on whether Uber complies with data minimization requirements when sending all EU data to the U.S. in the first place.
Moral superiority
Europe is proud of its legislation, which often has effect far beyond its territory, a phenomenon coined the Brussels Effect. The mantra is that our laws embody fundamental human rights and values. This seemed to be confirmed by the AI Act, when an executive order by then-U.S. President Joe Biden regulated AI on similar principles. When President Donald Trump annulled this executive order upon taking office, the move was condemned in the EU as a threat to EU fundamental rights and values. But is that really the case?
The fact is that AI is already regulated in the U.S. by existing legislation, such as laws on unfair commercial practices, data protection, discrimination and consumer credit. American supervisory authorities already have the authority, and use it, to tackle AI if it produces discriminatory results or if algorithms are trained on unlawfully collected data. While the EU was still debating the AI Act, U.S. regulators were already issuing orders to destroy unfair algorithms and the data used for their training. The U.S. Federal Trade Commission even started an enforcement sweep, "Operation AI Comply."
Telling the U.S. that it does not have laws that implement fundamental rights and values is therefore not correct. The underlying values are essentially the same: no cheating, misleading or discrimination, no harm and no unfair competition. It is the way in which we regulate that differs.
Does this justify our moral superiority? Not as far as I'm concerned. So yes, I understand where Big Tech is coming from. At the same time, Big Tech tries to use these understandable irritations to evade the underlying material rules that really matter, and this is happening on both sides of the Atlantic.
Public opinion
In terms of enforcement against Big Tech, this means we have to pick our fights selectively, on topics where Big Tech will lose in the court of public opinion, including in the U.S. Remember, these days politics is conducted as much via social media as through democratic institutions.
This means that we must focus on practices that are unfair, harmful or discriminatory. These practices are also prohibited and enforced in the U.S.
In May, for example, Google agreed to pay USD1.4 billion in a settlement with the state of Texas over allegations it violated user privacy. Google allegedly continued to collect users' location and facial recognition data without consent. Previously, Google settled for USD391.5 million with 40 other U.S. states.
In the EU, too, public opinion is needed to mobilize our governments, businesses and citizens to vote with their feet by switching to alternative digital platforms that comply with European rules, not only as users but also through advertising budgets. This hits where it hurts: in numbers of users and in advertising revenue. Therefore, we need to focus on stopping dark patterns that nudge users into giving unwanted consent, and on addictive design techniques that keep visitors glued for longer.
Manipulation of algorithms
The real power of Elon Musk and Mark Zuckerberg further lies in their ability to configure their recommender systems to direct content to specific groups of users. A bizarre example illustrates this: when a post on X (formerly Twitter) by Joe Biden attracted more attention than one by Elon Musk, Musk rallied a team of roughly 80 engineers to reconfigure X's algorithm so his posts would be more widely viewed than Biden's. This manipulation of X's algorithm also pushed his endorsements of the German far-right party Alternative für Deutschland into millions of people's feeds right before the German elections.
There was also great outrage in the EU about Zuckerberg's announcement that Meta would abolish Facebook's fact-checkers for content moderation. This is a total red herring. Content moderation sits at the end of the pipeline. The real concern is Zuckerberg's announcement that Meta will dial back censorship of political content. This means the recommender system will unnaturally amplify far-right content, as it triggers outrage and gets more attention: eyeballs.
Meta knows very well that the real issue is the unnatural amplification by its recommender system and not the content moderation. Its leaked internal research is clear: "Our algorithms exploit the human brain's attraction to divisiveness."
"We are never going to remove everything harmful from a communications medium used by so many, but we can at least do the best we can to stop magnifying harmful content by giving it unnatural distribution."
Coordinated action
Enforcing against these harmful practices is quite possible under current EU law. This requires close cooperation between relevant supervisory authorities tasked with enforcement of data protection, AI, consumer protection and competition, as well as the establishment of joint technical teams that can properly understand and assess algorithms.
No country in the EU can do this alone. It requires coordination — not only within the EU, but preferably together with Australia, Canada and the U.K.
My appeal to the European Parliament is to stop submitting proposals for new regulations to curb Big Tech. We have all the laws we need, just like the U.S.
My appeal to all European supervisory authorities is to stop enforcing process rules. This causes irritation not only for Big Tech, but also for our own companies. We must use the freed-up capacity to combat unfair practices by Big Tech, practices that are also prohibited in the U.S. and on which we will win in the court of public opinion.
I call upon the EU media to widely cover these unfair practices and to flag equivalent, EU-compliant digital services as alternatives.
And I call upon citizens and organizations to vote with their feet by moving to other digital platforms and redirecting advertising budgets.
Lokke Moerel is Professor Global ICT Law at Tilburg University. This op-ed reflects the personal opinion of the author only.