Artificial intelligence is rapidly transforming the global economy and society. From accelerating the development of pharmaceuticals to automating factories and farms, many countries are already seeing the benefits of AI.
Unfortunately, it is becoming increasingly clear that the European Union’s data-processing regulations will limit the potential of AI in Europe. Data provides the building blocks for AI, and with serious restrictions on how businesses can use that data, European companies will not be able to use the technology to its full potential.
The EU General Data Protection Regulation, which came into force a year ago, will affect the use of AI in at least three ways: by limiting the collection and use of data, restricting automated decision-making, and increasing compliance costs and risks. Unless the EU reforms the GDPR, Europe will fall behind others, such as the United States and China, in the development and use of AI.
First, the GDPR generally requires organizations to minimize the amount of data they collect and to use it only for its original intended purpose. These restrictions, set out in Article 5, significantly limit organizations’ ability to innovate with data: they prevent organizations from collecting new data before they understand its potential value and from reusing existing data for novel purposes. It is not always possible to know in advance which data will be most valuable or yield the most important insights. Indeed, organizations often create new value by combining datasets, which makes it difficult to know at the outset which data will matter most and why. Many machine-learning systems improve their accuracy and efficiency with access to large datasets. In this respect, companies subject to the GDPR will face restricted access to data, putting them at a disadvantage compared with competitors in the United States and China.
Second, the GDPR limits how organizations can use data to make automated decisions. Article 22 establishes that whenever companies use AI to make a significant decision about an individual, such as whether to offer a loan, the data subject has the right to have a human review that decision. This requirement makes it difficult and impractical for companies to automate many processes, because they must maintain a redundant manual process for any individual who opts out of the automated one. Having humans review automated decisions is costly: one of the primary reasons for using AI is to reduce the time humans spend processing large quantities of data, yet this requirement keeps humans involved anyway. And the more sophisticated the AI system, the more difficult and expensive it is for humans to review. As a result, even where algorithms make more accurate predictions, increase transparency and fairness, and are more cost-effective, the GDPR will incentivize companies to rely on human decision-making, at the expense of accuracy and consumer protection.
The GDPR also limits automated decision-making because it requires organizations to explain how an AI system reaches decisions that have significant impacts on individuals. Yet the logic of many advanced systems cannot always be explained, even by their developers, and even when those systems produce more accurate and less biased decisions than humans would. As a result, many businesses will not be able to use the most advanced AI systems because they cannot comply with this requirement.
Third, the GDPR exposes organizations using AI to substantial compliance costs and risks. The regulation imposes direct costs, such as obtaining affirmative consent from individuals to process their data and hiring data protection officers. But organizations also face substantial compliance risk because of ambiguous provisions in the law, uncertainty about how data protection authorities will interpret those provisions, and steep fines for violations, whether intentional or not. Companies will likely err on the side of caution and limit their use of data, even in ways that go beyond the law’s original intent, to avoid entanglements with regulators. Meanwhile, businesses in countries not subject to the GDPR, such as the United States and China, will move ahead quickly rather than wait for this regulatory limbo to clear.
Unless EU policymakers address these fundamental problems, the GDPR will inhibit the development and use of AI in Europe, putting European firms at risk of a competitive disadvantage in the emerging global algorithmic economy.
Fortunately, there are steps policymakers can take to make targeted reforms without undermining the goals of the regulation. The EU should reform the GDPR for the algorithmic economy by expanding authorized uses of AI in the public interest, allowing the repurposing of data posing minimal risk, removing penalties for automated decision-making, permitting basic explanations of automated decisions, and making fines proportional to harm.
These reforms should happen quickly because time is of the essence: first-mover advantages are critical in AI, just as they were in the 1990s with the rise of the internet. AI is evolving rapidly, and the EU needs to ensure that the GDPR evolves with it. Unless EU policymakers amend the GDPR, the EU will not achieve its vision of becoming a global leader in AI.
Daniel Castro is the vice president of the Information Technology and Innovation Foundation and director of the Center for Data Innovation. Eline Chivot is a senior policy analyst for the Center for Data Innovation.