
Privacy Perspectives | Want Europe to have the best AI? Reform the GDPR



Artificial intelligence is rapidly transforming the global economy and society. From accelerating the development of pharmaceuticals to automating factories and farms, many countries are already seeing the benefits of AI.

Unfortunately, it is becoming increasingly clear that the European Union’s data-processing regulations will limit the potential of AI in Europe. Data provides the building blocks for AI, and with serious restrictions on how they can collect and use it, European businesses will not be able to use the technology to its full potential.

The EU General Data Protection Regulation, which came into force a year ago, will affect the use of AI in at least three ways: by limiting the collection and use of data, restricting automated decision-making, and increasing compliance costs and risks. Unless the EU reforms the GDPR, Europe will fall behind others, such as the United States and China, in the development and use of AI.

First, the GDPR generally requires organizations to minimize the amount of data they collect and to use it only for its original intended purpose. These restrictions, set out in Article 5, significantly limit organizations’ ability to innovate with data: they prevent organizations from collecting new data before they understand its potential value and from reusing existing data for novel purposes. For most organizations, it is not possible to know in advance which data will be most valuable or will yield the most important insights. Indeed, they often create new value by combining datasets, which makes it difficult to know at the outset which data will have the most value and why. Many machine-learning systems benefit from access to large datasets, which allow them to improve their accuracy and efficiency. In this respect, companies subject to the GDPR will face restricted access to data, putting them at a disadvantage compared to competitors in the United States and China.

Second, the GDPR limits how organizations can use data to make automated decisions. Article 22 of the GDPR establishes that wherever companies use AI to make a significant decision about individuals, such as whether to offer a loan, the data subject has the right to have a human review that decision. This requirement makes it difficult and impractical for companies to use AI to automate many processes, because they must maintain a redundant manual process for any individual who opts out of the automated one. Having humans review automated decisions is costly: one of the primary reasons for using AI is to reduce the amount of time humans spend processing large quantities of data, yet this requirement effectively keeps humans in the loop. The more sophisticated the AI system, the more difficult and expensive it is for humans to review. As a result, even where algorithms make more accurate predictions, increase transparency and fairness, and are more cost-effective, the GDPR will incentivize companies to have humans make decisions, at the expense of accuracy and of protections for their customers.

The GDPR also limits automated decision-making because it requires organizations to explain how an AI system makes decisions that have significant impacts on individuals — a logic even developers cannot always explain even if it leads to more accurate and less biased decisions than those made by humans. As a result, many businesses will not be able to use the most advanced AI systems, because they would not be able to comply with this requirement.

Third, the GDPR exposes organizations using AI to substantial compliance costs and risks. The GDPR imposes direct costs, such as obtaining affirmative consent from individuals to process their data and hiring data protection officers. But organizations also face substantial compliance risk because of ambiguous provisions in the law, uncertainty about how these provisions will be interpreted by data protection authorities, and steep fines from regulators for violations — whether intentional or not. Companies will likely err on the side of caution and limit their use of data, even in ways that go beyond the law’s original intent, to avoid future entanglements with regulators. Businesses in countries such as the United States or China that are not subject to the GDPR will move ahead quickly, free of this regulatory limbo.

Unless EU policymakers address these fundamental problems, the GDPR will inhibit the development and use of AI in Europe, putting European firms at risk of a competitive disadvantage in the emerging global algorithmic economy.

Fortunately, there are steps policymakers can take to make targeted reforms without undermining the goals of the regulation. The EU should reform the GDPR for the algorithmic economy by expanding authorized uses of AI in the public interest, allowing the repurposing of data posing minimal risk, removing penalties for automated decision-making, permitting basic explanations of automated decisions, and making fines proportional to harm. 

These reforms should happen quickly because time is of the essence; this is a technology where first-mover advantages are critical, just as they were in the 1990s with the rise of the internet. AI is rapidly evolving, and the EU needs to ensure that the GDPR evolves at the same time. Unless EU policymakers amend the GDPR, the EU will not be able to achieve its vision of becoming a global leader in AI.

Daniel Castro is the vice president of the Information Technology and Innovation Foundation and director of the Center for Data Innovation. Eline Chivot is a senior policy analyst for the Center for Data Innovation.

Photo by Franck V. on Unsplash



  • Jeroen Terstegge • Jun 1, 2019
    This article is wrong in so many ways.
    1. It reads like a lobby article for the tech industry that wants more lax data protection regulations in Europe. Oh wait... it is!
    2. Because of its intention, the article highlights elements of the GDPR and places them in the wrong context to scare readers.
    - First of all, art. 22 doesn’t offer an opt-out and require a manual process. It only requires safeguards, including human intervention, which means that if a data subject doesn’t agree with the decision, a human should review it. It is a fairness review, and for good reason, as bias is the biggest problem of AI-based decision-making. Provided the algorithm has no bias, the fairness review means that in most cases the human will likely uphold the decision.
    - Secondly, I agree to some extent with the explanation part, but that is a challenge we will have to overcome, since explaining our decisions is what the rule of law in a society where people interact with each other is all about. Just as you don’t want to be sent to jail without an explanation, you don’t want your bank to deny you a mortgage without an explanation. Explaining the decision is also in the bank’s interest: if the mortgage application process is seen as a lottery, customers will seek their mortgages elsewhere.
    - Thirdly, consent is almost never necessary. Consent is the fall-back position if nothing else works, not the primary basis for data processing.
    - Fourth, training AI with data is covered by the research article (art. 89), subject to suitable safeguards that is.
    3. But most important of all, the GDPR does not block the uptake of AI at all. Art. 22.2.b leaves it to EU and Member State law to regulate AI, the only requirement for such laws being that such laws promote responsible AI, not irresponsible AI coming from countries with no or less strict data protection laws. After all, the EU is a values-based community, and the GDPR contributes to upholding such values in the information economy (see recitals 2, 4, 6 and 7).
    Don’t read more into the GDPR than necessary. It is not an almighty law for the information society. The GDPR is not intended to regulate business processes, only information processes. Other laws, like civil codes, labour laws, competition laws, consumer protection laws, and so on, regulate business processes. In the absence of such laws, we have the GDPR to help us design fair business processes. And if that proves difficult with only the GDPR in hand, all the GDPR nudges us to do is to get together and draft a law that works for our business process and our society. Our next challenge in Europe is to design laws for the fair use of AI. Not a single syllable of art. 22 GDPR needs to be changed to make that happen.
  • Daniele Santangelo • Jun 1, 2019
    I would like to share my thoughts on the subject too.
    The GDPR aims at reducing the limitless harvesting of personal data which, in turn, exposes the collected data to potentially unauthorized use, let alone potential harm, disclosure, leak or theft.
    Reviewing collection and processing means that organizations will be able to greatly improve their own procedures for selecting and combining data, and therefore will better identify the most meaningful data to collect and use. By simply complying with the GDPR, they could rightfully collect data and know the data WILL be useful to their present and future needs. Do I need to mention the many analyses that confirm how implementing the GDPR measures is cost-effective in the mid to long term for the organizations themselves (among other things by reducing data breaches and the connected costs, monetary and reputational)?
    Then you take into account that some people MIGHT opt out, without providing any evidence that their number will meaningfully affect the overall cost/benefit of the whole decision process. Plus, according to the GDPR, you can have a complete or a partial opt-out. In the first scenario (22.1) I can't see any reason it should be different, considering that the interest in processing the data weighs entirely against the data subject. In the second scenario (22.2), the partial opt-out allows the automated decision to still happen as intended, with the human intervention needed just as a final review, if and when requested. And in this regard I will just say one word: accountability; i.e., if the process and the logic behind it are well documented and tracked, a human review won't be so difficult nor too costly to implement. We all know how AI decisions will always imply a partially "black box" mechanism, but a balance is needed to protect data subjects, and trade-offs are still possible to avoid cost/benefit losses. I won't comment on the statement "a logic even developers cannot always explain even if it leads to more accurate and less biased decisions than those made by humans" because it's too evident how fallacious it is (if the developers cannot explain the logic, how can you assert the final decision would be more accurate and less biased? A leap of faith?). By the way, you should not forget CJEU Case C-131/12 - Google Spain vs Mario Costeja González.
    Then you state that "gaining affirmative consent from individuals to process their data" is a cost: well, it is indeed a cost, but I want to think you've just thrown a sentence in there rather than stated a real belief, because one would seriously question your ethics otherwise...
    And as Jeroen Terstegge already pointed out, the consent is just a fall-back basis and not a primary one.
    A word on the U.S. now (and no, I won't even talk about China, because seriously, I respect them a lot, but you wouldn't really take them as an example, would you?). Have you ever heard about the CCPA? Well, it seems way more restrictive than the GDPR when it comes to using personal data for anything other than the original purpose. And the CCPA is not the only such law that has been passed or is being discussed among the States. Are we sure they won't come up with regulations which, in the end, will put in place more limits than the GDPR? Or maybe even conflict with each other, thus exacerbating the situation? We can't say for sure, but assuming it won't be this way just seems unfair to the overall discussion.
    Finally, a word on the article as a whole: your analysis at first points in the direction of an inflexible piece of legislation, which poses a limit on companies wishing to develop AI; then the same analysis steers towards the GDPR's uncertainty, stating that it limits... companies wishing to develop AI! You should take a position: is the GDPR too prescriptive or is it too general? Or are you simply trying to discredit the GDPR no matter what?
    In my opinion the vision of the GDPR is clear: it doesn't restrict the use of data, it just restricts its unlawful use.
    The GDPR is a cornerstone, yet it shows great flexibility because of its "technology neutral" nature. It will shape new applications, and at the same time it will be shaped by new interpretations of technology in the jurisprudence.
  • Daniel Castro • Jun 2, 2019
    Jeroen, you mistakenly believe that data rules only affect the tech industry. This is completely wrong. The GDPR, and most other data protection rules, have a huge impact on most sectors of the economy. And that is why it is so important to get them right for the emerging AI economy — getting them wrong means that European businesses will be significantly less competitive than their peers abroad.
    Let me respond to your two main objections. You say that article 22 won’t have a serious impact because a human will most likely uphold the decision of the algorithm. But that misses the point. Indeed, a human may always agree with the algorithm even after a thorough review — but the cost of having to do this review is the problem. And the GDPR does not allow companies to simply affirm that the algorithm is working correctly — they must in fact go through a full manual review every time it is required. So companies must be prepared to engage in this review, which means they cannot automate many processes where this type of review would be infeasible. We have already seen companies start to weaponize the GDPR (e.g., services that encourage users to waste the time and money of businesses by overloading them with GDPR data access requests). Second, you say that market forces will affect 
    You also say that the GDPR doesn’t matter for AI because member states can regulate that component (pointing to Article 22.2.b). But that’s exactly my point. The GDPR was supposed to create a single digital market across the EU — that was one of the primary reasons for updating the law. And even now, not all countries have implemented the law. But Europe will not have a DSM for AI if each country has to implement its own exceptions. That needs to be built into any data protection rules, not put on as an afterthought. Don’t privacy experts always argue for “privacy by design”? The same principle applies to innovation and laws — and the GDPR is clearly not set for “innovation by design” when it comes to AI. And since there are no EU-wide exceptions, it is left to the individual member states.
    Finally, consent is a primary feature of the GDPR. Indeed, much of the compliance focus at many companies right now is around ensuring they have appropriate consent. The GDPR puts so much focus on compliance and data management for all data that it forces companies to treat all personal data the same, instead of prioritizing protections for the most sensitive data and the most sensitive applications. And that is why, even a year after the GDPR, consumer trust has not changed in Europe and is no different than in the United States. If the purpose of the GDPR was to restore trust, it has largely failed. It would be better if defenders of the GDPR stopped treating the law like stone tablets passed down by the hand of God, and more like a working draft that should evolve and improve over time with new evidence and technologies.
  • Jeroen Terstegge • Jun 3, 2019
    Daniel, I don’t say the GDPR is perfect :) It has a few flaws. However, not in art. 22. The biggest flaw of the GDPR is its application by newbie ‘experts’ who have jumped on it as a means to expand business. And because clients have no clue either, thanks to the necessarily vague language used, in the land of the blind the one-eyed man is king, and we see the GDPR applied in all the wrong ways. One cannot apply the GDPR without a full understanding of its history and its place in the EU legal framework (nor criticize it, for that matter). The GDPR is part of a much larger legislative package to modernize EU law to keep up with the information society. Contrary to what you suggest, the GDPR is not meant to create a digital single market on its own; it is only part of the effort. The second flaw of the GDPR is that it has entered into application way before the rest of the Digital Market package was finished (if it ever will be). So, we are still missing some of the EU legislation which the GDPR envisions to exist, like EU legislation for fully automated decision-making. However, there are limits to what the EU can do here. There are a few areas where the EU is not (solely) competent to legislate, but where member states are in the lead (e.g., all areas mentioned in Chapter IX of the GDPR, like employment, as well as public sector governance). As AI will be used across a variety of sectors, there will be limits to what Brussels can do to harmonize the uptake of AI. After all, the EU is not a federal state. Art. 22.2.b leaves it to “Union or Member State” law to regulate AI, where most logically the EU will be in the lead with regard to the use of AI in consumer and B2B products and services and healthcare, and Member States will be in the lead with regard to the use of AI in employment situations and the public sector. Sure, the pace at which law is developed will always be slower than the pace at which technology and business models are developed.
    But already, we see MPs in Member States calling for legislation for AI in specific sectors. And that is exactly what the GDPR wants the legislature to do. Most likely, such laws will not be omnibus laws, but very sector-specific to address the specific risks of AI in those sectors. As long as laws for consumer and B2B products and services are enacted as EU laws, the EU Digital Single Market will not be compromised. As for art. 22.2.a (contract with safeguards including human intervention), the GDPR indirectly puts the ball in the court of the legislator again, this time consumer protection law. After all, any EU sector-specific law based on art. 22.2.b would override art. 22.2.a as a lex specialis (just like the ePR will override the GDPR for matters regulated by the ePR). So, in theory, the GDPR works. Overly strict ‘experts’, and even more so their clients, will have to do better to understand the GDPR. And the legislator will have to step up its work to ensure AI is rolled out responsibly in the EU.
  • Zsolt Bartfai • Jun 6, 2019
    Congratulations! Finally I am not alone in saying that the GDPR kills innovation (and that it has many other serious deficiencies). - cf.