In April last year, the European Commission published its ambitious proposal to regulate Artificial Intelligence. The regulation was meant to be the first of its kind, but the progress has been slow so far due to the file's technical, political and juridical complexity.
Meanwhile, the EU lost its first-mover advantage as other jurisdictions like China and Brazil have managed to pass their legislation first. As the proposal is entering a crucial year, it is high time to take stock of the state of play, the ongoing policy discussions, notably around data, and potential implications for businesses.
State of play
For the European Parliament, delays have been mainly due to more than six months of political disputes between lawmakers over who was to take the lead in the file. The result was a co-lead between the centrists and the center-left, sidelining the conservative European People's Party.
Members of European Parliament are now trying to make up for lost time. The first draft of the report is planned for April, with discussions on amendments throughout the summer. The intention is to reach a compromise by September and hold the final vote in November.
The timeline seems particularly ambitious since co-leads involve double the number of people, inevitably slowing down the process. The question will be to what extent the co-rapporteurs will remain aligned on the critical political issues as the center-right will try to lure the liberals into more business-friendly rules.
Meanwhile, the EU Council has made some progress on the file, though it has been limited by the text's highly technical nature. It is telling that even national governments, which have significantly more resources than MEPs, struggle to grasp the new rules' full implications.
Slovenia, which led the diplomatic talks for the second half of 2021, aimed to develop a compromise for 15 articles, but only covered the first seven. With the beginning of the French presidency in January, the file is expected to move faster as Paris aims to provide a full compromise by April.
As the policy discussions progressed in the EU Council, several sticking points emerged. The very definition of AI systems is proving problematic, as European governments seek to distinguish them from traditional software programs or statistical methods.
The diplomats also added a new category for "general purpose" AI, such as synthetic data packages or language models. However, there is still no clear understanding of whether the responsibility should be attributed upstream, to the producer, or downstream, to the provider.
The use of real-time biometric recognition systems has largely monopolized the public debate, as the Commission's proposal stops short of a total ban, carving out some crucial exceptions, notably for terrorist attacks and kidnappings. In October, lawmakers adopted a resolution pushing for a complete ban, echoing civil society's argument that these exceptions create a dangerous slippery slope.
By contrast, facial recognition technologies are increasingly common in Europe. A majority of member states want to keep or even expand the exceptions to cover border control, with Germany so far relatively isolated in calling for a total ban.
"The European Commission did propose a set of criteria for updating the list of high-risk applications. However, it did not provide a justification for the existing list, which might mean that any update might be extremely difficult to justify," Lilian Edwards, a professor at Newcastle University, said.
Put differently, since the reasoning behind the lists of prohibited or high-risk AI uses is largely value-based, they are likely to remain heatedly debated points throughout the legislative process.
For instance, the Future of Life Institute has been arguing for a broader definition of manipulation, which might profoundly impact the advertising sector and the way online platforms currently operate.
A dividing line that is likely to emerge systematically in the debate is the tension between the innovation needs of the industry, as some member states already stressed, and ensuring consumer protection in the broadest sense, including the use of personal data.
Data protection & AI
This underlying tension is best illustrated in the ongoing discussions on the report of the parliamentary committee on Artificial Intelligence in a Digital Age, which are progressing in parallel to the AI Act.
In his initial draft, conservative MEP Axel Voss attacked the General Data Protection Regulation, presenting AI as part of a technological race in which Europe risks becoming China's "economic colony" if it does not relax its privacy rules.
The report faced backlash from left-of-center policymakers, who saw it as an attempt to water down the EU's hard-fought data protection law. For progressive MEPs, data-hungry algorithms fed with vast amounts of personal data might not be desirable in the first place, and they drew a parallel with their efforts to curb personalized advertising.
"Which algorithms do we train with vast amounts of personal data? Likely those that automatically classify, profile or identify people based on their personal details — often with huge consequences and risks of discrimination or even manipulation. Do we really want to be using those, let alone 'leading' their development?" MEP Kim van Sparrentak said.
However, the need to find a balance with data protection has also been underlined by Bojana Bellamy, president of the Centre for Information Policy Leadership, who notes how some fundamental principles of the GDPR would be in contradiction with the AI regulation.
In particular, a core principle of the GDPR is data minimization: only the personal data strictly needed to complete a specific task should be processed, and it should not be retained for longer than necessary. Conversely, the more data AI-powered tools receive, the more robust and accurate they become, leading (at least in theory) to fairer and less biased outcomes.
For Bellamy, this tension stems from the lack of a holistic strategy in the EU's hectic digital agenda; she argues that policymakers should take a more results-oriented approach to what they are trying to achieve. These contradictory demands are likely to fall on industry practitioners, who may be asked to deliver a fair and unbiased system while also minimizing the amount of personal data collected.
The draft AI law includes a series of obligations for system providers, namely the organizations that make AI applications available on the market or put them into service. These obligations will need to be operationalized: for instance, what it means to have a "fair" system, how far "transparency" should go and how "robustness" is defined.
In other words, providers will have to put a system in place to manage risks and ensure compliance with support from their suppliers. For instance, a supplier of training data would need to detail how the data was selected and obtained, how it was categorized and the methodology used to ensure representativeness.
In this regard, the AI Act explicitly refers to harmonized standards that industry practitioners must develop to exchange information to make the process cost-efficient. For example, the Global Digital Foundation, a digital policy network, is already working on an industry coalition to create a relevant framework and toolset to share information consistently across the value chain.
In this context, European businesses fear that if the EU's privacy rules are not effectively incorporated into the international standards, they could be put at a competitive disadvantage. The European Tech Alliance, a coalition of EU-born heavyweights such as Spotify and Zalando, voiced concerns that the initial proposal did not include an assessment for training datasets collected in third countries, which might have been gathered via practices at odds with the GDPR.
Adopting industry standards creates a presumption of conformity, minimizing the risk and cost of compliance. These incentives are so strong that harmonized standards tend to be universally adopted by industry practitioners, as the costs of departing from them become prohibitive. Academics have described standardization as the "real rulemaking" of the AI regulation.
"The regulatory approach of the AI Act, i.e. standards compliance, is not a guarantee of low barriers for the SMEs. On the contrary, standards compliance is often perceived by SMEs as a costly exercise due to expensive conformity assessment that needs to be carried out by third parties," Sebastiano Toffaletti, secretary-general of the European DIGITAL SME Alliance, said.
By contrast, European businesses that are not strictly "digital" but that could embed AI-powered tools into their daily operations see the AI Act as a way to bring legal clarity and ensure consumer trust.
"The key question is to understand how can we build a sense of trust as a business and how can we translate it to our customers," Nozha Boujemaa, global vice president for digital ethics and responsible AI at IKEA, said.