On April 21, 2021, the European Commission published its bold and comprehensive proposals for the regulation of artificial intelligence.

With suggested fines of up to 6% of annual global turnover, as well as new rules and prohibitions governing high-risk AI systems, the announcement has already generated considerable interest and speculation about how it will affect both the technology companies that develop AI systems and the industries that use them.

Due to the critical role that data plays in the development of machine learning technologies, the regulation is of particular consequence to the privacy profession. While consideration of the full range of implications of the draft text is only just beginning, some important initial conclusions can already be drawn.

New framework for the future

What is immediately apparent is that this proposal is a groundbreaking attempt to regulate the future of our digital and physical worlds. With its announcement this week, the commission has put forward an entirely new body of law, one that intends to place ethical issues such as bias mitigation, algorithmic transparency and human oversight of automated systems on a statutory footing. The framework, therefore, promises to have the same profound impact on the use of AI as the EU General Data Protection Regulation has had on personal data.

Data is at the heart of the regulation

Data governance forms an integral part of the obligations intended to apply to providers of high-risk AI systems. The regulation requires providers to apply a range of quality and governance measures to the datasets used in the training, validation and testing of machine learning and similar technologies. This includes identifying potential biases, checking for inaccuracies and assessing the suitability of the data.
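As a purely illustrative sketch of what such checks might look like in practice, the snippet below reviews a small tabular training set for gaps, duplicates and group imbalance. The column names, data and checks are hypothetical assumptions chosen for the example; the regulation itself does not prescribe any particular tooling or method.

```python
# Illustrative only: a minimal review of a training dataset of the kind
# the regulation contemplates. Column names and data are hypothetical.
import pandas as pd

def review_training_data(df: pd.DataFrame, protected: str, label: str) -> dict:
    """Run basic checks for missing values, duplicates and group imbalance."""
    return {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        # Positive-outcome rate per protected group: a crude signal of
        # potential bias in the training data, not a legal test.
        "positive_rate_by_group": df.groupby(protected)[label].mean().to_dict(),
    }

if __name__ == "__main__":
    data = pd.DataFrame({
        "gender": ["f", "m", "f", "m", "f", "m"],
        "hired":  [0, 1, 0, 1, 1, 1],
    })
    print(review_training_data(data, protected="gender", label="hired"))
```

In practice, checks of this kind would form one small part of the documented quality management process the regulation envisages.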

The suggested maximum fines that can be imposed under the regulation, of up to 6% of annual global turnover, are only intended to apply in a limited range of circumstances. Notably, these circumstances include a breach by a provider of the data governance requirements, demonstrating the importance the commission attributes to this issue.

Strong governance and risk management is key

While the GDPR’s introduction of the principle of accountability was a significant step-change in privacy law, requiring organizations to put in place practical measures to demonstrate compliance, the AI regulation is even more ambitious. Providers of high-risk AI systems are expected to implement comprehensive governance and risk management controls. These include a strategy for regulatory compliance, procedures and techniques for the design and development of the AI system, and a process for evaluating and mitigating the risks that may arise throughout its entire lifecycle. Conformity assessments will also need to be undertaken to demonstrate adherence to the regulation’s requirements.

Uncertainty over ‘user’ obligations

Organizations that procure high-risk AI systems from third-party vendors are also subject to new rules. These rules are underpinned by the expectation that the user operates the system, and monitors its performance, in accordance with a set of technical instructions to be developed by the provider.

However, because the content of these instructions appears set to be determined on a case-by-case basis, and is not clearly specified in the regulation, users face significant uncertainty as to the nature of their compliance obligations.

Light-touch approach to lower risk AI

For AI systems that are neither prohibited nor deemed high-risk, the commission has taken a more pragmatic, light-touch approach. Providers will be expected to inform individuals when they are interacting with an AI system, unless this is obvious from the context. However, neither providers nor users will be expected to give detailed explanations of the underlying algorithms or how they operate.

Wide extraterritorial scope

Concern is likely to arise in connection with the wide extraterritorial scope that the regulation seeks to apply. Providers based in third countries, such as the United States, will be subject to the regulation’s requirements if they make their AI system available in the EU. Similarly, and perhaps more significantly, the law will also apply to both providers and users of AI systems where the "output" of that system is used in the EU. This condition has the potential to catch a significant number of additional organizations that have no commercial presence in Europe.

Absence of a one-stop-shop mechanism

Interestingly, while many parallels can be drawn between the GDPR and this proposal, the commission has chosen not to include a one-stop-shop mechanism, which would have allowed a single lead authority to oversee the compliance of organizations operating across multiple member states. Instead, the regulation envisages that one or more national authorities will be appointed in each country with powers of enforcement.

This approach could result in the fragmentation of supervision of AI systems that are marketed and used on a cross-border basis. It will, therefore, be interesting to see whether further clarifications will be provided on the mechanisms that will ensure appropriate cooperation and consistency between national authorities, beyond the establishment of a new European Artificial Intelligence Board.

Importance of harm

Harm, or more specifically the prevention of harm to individuals, is the key objective underpinning the regulation. The commission considers that harm may arise both physically, through unsafe AI systems, and through risks to individuals’ fundamental rights, such as privacy and the right to non-discrimination.

This principle can be seen as the basis for the commission’s rationale as to why certain types of AI have been identified as being high-risk and, therefore, subject to new rules and prohibitions.

This is just the beginning

This week’s announcement by the commission is just the beginning of a vital debate that needs to take place between policymakers, governments and industry about how AI should be regulated in the future. The next step is for the proposal to be reviewed and debated by the European Council and Parliament.

AI governance should become a priority

In the meantime, it is important that organizations that develop or utilize AI consider the strength of their existing governance mechanisms. AI is becoming an increasingly important topic of interest to regulators, not only in the EU, but also across many other major economies, including the U.S. and U.K.

Organizations should consider whether they are currently taking appropriate steps to manage the risks of bias, inaccuracy and other forms of harm in their AI systems, and whether they have adequate controls in place to comply with existing law, including privacy, consumer protection and anti-discrimination legislation.
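By way of a hedged illustration, one simple control of this kind might compare outcome rates across groups and flag large disparities for human review. The metric, threshold and column names below are assumptions chosen for the example, not values drawn from the proposal or any other law.

```python
# Illustrative only: a basic demographic parity check on recorded
# decisions. The 0.1 threshold and column names are assumptions.
import pandas as pd

def demographic_parity_gap(decisions: pd.DataFrame,
                           group_col: str, outcome_col: str) -> float:
    """Largest difference in favourable-outcome rate between groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 0],
})
gap = demographic_parity_gap(decisions, "age_band", "approved")
if gap > 0.1:  # illustrative threshold; flag for human review
    print(f"Potential disparity detected: gap = {gap:.2f}")
```

Checks like this do not establish legal compliance, but they give governance teams a concrete starting point for monitoring and documentation.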
