
Will the EU AI Act work? Lessons learned from past legislative initiatives, future challenges


The EU Artificial Intelligence Act has not yet been published in the Official Journal of the European Union, yet its unquestionably disruptive significance as the world's first legislative effort to regulate AI is already sparking debate about its upcoming implementation and the problems that may emerge.

With no precedent to refer to, the European Commission built the legislative framework on the product safety model, making the AI Act part of the so-called harmonization legislation that regulates products circulating in the single market.

The text has evolved considerably since then, making the AI Act a rather hybrid form of legislation. While stakeholders and experts have voiced criticisms, the chosen structure for regulating AI in Europe not only has its own merits but also constitutes, in its final form, a novel approach at the intersection of technical product safety legislation and legislation intended to protect fundamental rights.

We do not know yet whether this pioneering law will be effective. That will depend on several factors. If it is not, however, it will likely be for reasons other than the ones currently under debate.

AI is not a toaster: A twofold criticism

The main criticism directed at the AI Act runs in two directions. Both hold that something as powerful and complex as AI cannot be regulated through simple technical requirements.

On the one hand, this criticism comes from stakeholders focused on the protection of personal data and fundamental rights. For them, merely adding fundamental rights — along with democracy and the environment — to the protected interests of classic product safety legislation, such as health and safety, is already a contradiction, as one cannot treat a system that decides whether a person is entitled to refugee status the same way as a cold piece of machinery.

In its final form, the AI Act gives far more prominence to fundamental rights safeguards. Apart from increasing the number of prohibited practices and restricting those with the highest potential for abuse, it establishes AI-specific rights and remedies — in particular, the right to obtain a meaningful explanation of a decision made by an AI system. Moreover, it creates a new obligation for deployers to perform a fundamental rights impact assessment before using a high-risk AI system, so the specific context is better taken into account.

The rights and remedies were not included in the Commission's draft proposal, which considered that adequate remedies were already provided by existing consumer protection legislation and did not belong in technical harmonization legislation. The fundamental rights impact assessment deviates from the classic "cascade" of operator obligations normally found in product safety legislation, in which the manufacturer — in this case, the provider — holds most of the responsibility, followed with progressively fewer obligations by the importer and the distributor, and finally the user — in this case, the deployer — with virtually none.

AI is not a toaster or a washing machine, so in high-risk use cases the deployer needs to take extra care in using it, particularly when it can negatively impact the fundamental rights of affected persons. That is why, at Parliament's insistence, the fundamental rights impact assessment was introduced, extending the obligations the text already placed on AI system deployers.

The other angle holds that AI is not a static product with a linear value chain and development, so its regulation cannot be treated the same way as that of a toy or a lift; it needs much more flexibility and continuous adaptation. Making the AI Act future-proof has been one of the key challenges from the outset of negotiations: how can the slow rhythm of the ordinary legislative procedure keep up with lightning-fast developments on the market?

Throughout the process, new developments forced us to rethink some concepts and to add or scrap some parts more than once. Critics fail to appreciate the degree of flexibility the chosen framework provides: most of the annexes can be updated, as can the threshold and other criteria for general-purpose AI models with systemic risk, and, above all, the act relies heavily on standards and guidelines.

As for all other harmonized products, the AI Act sets out a series of high-level technical requirements a high-risk AI system must meet to circulate in the EU market. How to implement them is largely left to the standardization process, which is mostly industry-led and therefore more likely to have first-hand experience of how to operationalize requirements through what then become established practices on the market.

Could we have been more detailed and less vague in the articles? Being overly prescriptive would have run precisely the risk of making the legislation too rigid and prone to rapid obsolescence, while leaving no room for interpretation in court. The many guidelines foreseen in the act will provide the additional support needed to implement the hard-law requirements and the standards.

Could we have introduced even more flexibility? Many stakeholders advocated for more: industry, for example, wanted the possibility to update certain definitions, while civil society proposed updating the list of prohibitions through secondary legislation.

We cannot forget that negotiators were constrained by the boundaries of EU law, under which essential elements of a piece of legislation, such as definitions or prohibitions, go far beyond the normal delegation of power to the European Commission; allowing the Commission to update them would undermine the democratic process. If targeted modifications become necessary, they could very well be the object of a quick legislative procedure, without another two years of negotiations.

Does the AI Act run the same risk as the GDPR?

Another key element makes the use of a product safety framework much less prone to problems in the implementation phase: its enforcement structure. The AI Act, like other pieces of harmonization legislation, relies on national market surveillance authorities for enforcement, whose powers and EU coordination rules are governed by Regulation (EU) 2019/1020 on market surveillance and compliance of products, a framework that works quite well in practice.

This enforcement structure marks a fundamental difference between the AI Act and the EU General Data Protection Regulation. Under the AI Act, market surveillance authorities intervene where the infringement took place — for example, in the member state where a noncompliant product is circulating — not where the provider is established and therefore processes the data, as is the case under the GDPR.

There are well-known problems in the implementation of the latter, with the notorious bottleneck in Ireland, where the majority of non-EU tech companies are established, which the recently approved revision on the enforcement of cross-border cases will only partially fix. Under the AI Act, however, the risk of one authority being overwhelmed by cases and carrying the burden of a large share of EU enforcement activity is much lower, as the workload is likely to be distributed among member states.

Moreover, the AI Act reinforces the possibility of conducting joint investigations and other activities, as well as mutual assistance, as already foreseen by the Market Surveillance Regulation. The main concern with this enforcement structure lies in the skills of market surveillance authority staff, as the nature of AI makes classic technical skills insufficient for assessing high-risk AI systems.

The final choice of which authority to appoint is left to member states, with the sole indication that its personnel should have sufficient expertise in fundamental rights law, personal data protection and other fields. Ideally, there should at least be a coordinating or supervisory authority (possibly independent, as advocated by Parliament) merging the expertise of the national data protection authority with that of a classic, technical market surveillance authority, as neither's expertise can function without the other's in the case of AI.

A comprehensive framework to build on

Considering these aspects, the mainstream criticism of the AI Act's current structure does not seem justified; if anything, the positive aspects appear to outweigh the negative.

However, if there is a major flaw that could render most of the regulation impossible to implement, it is the emphasis on the intended purpose of a narrow high-risk AI system, as opposed to a general-purpose one. Since the risk (and the trigger of the provider's liability) is tied to the specific use case, and virtually no system has just one narrow use case, much of the responsibility will fall on deployers who do not follow the instructions to the letter, and it will be very easy for providers to claim their system's intended purpose is one not listed in the regulation.

The emphasis on the intended purpose is a flaw inherited from the product safety structure that is ill-suited to complex technologies such as AI, especially given current developments in the field.

At the same time, one can wonder whether it would not have been better to regulate the overall internal compliance practices of companies and public authorities, instead of having them work separately on each single high-risk AI system placed on the market or put into use.

We will only be able to verify whether the system works in practice two years from now. In the meantime, what counts is that a comprehensive framework to harness AI and limit its risks exists and is, overall, considered a good basis by most stakeholders, a basis that will hopefully be easier to build on in the future.


