The first week of November 2023 may well be remembered as a tipping point for the global regulatory response to artificial intelligence following a slew of notable legislative and policy developments.
In the U.S., the Biden administration released a sweeping executive order, which led to a cascade of additional announcements, including draft policy guidance from the U.S. Office of Management and Budget. The G7 released a code of conduct for AI, and the U.K. held a highly publicized conference on AI safety with world leaders, industry executives and civil society.
Amid these global developments, Brussels remains near the center of the AI regulatory universe as EU institutions navigate their trilogue negotiations on the proposed AI Act. Though policymakers have been working hard to advance the comprehensive legislation, the clock is ticking ahead of the European Parliament's elections in June 2024 and a new European Commission.
Coming off the last set of trilogue negotiations 27 Oct., Kai Zenner, head of office and digital policy advisor to German Member of European Parliament Axel Voss, who is immersed in the AI Act trilogue negotiations, said he's not yet able to say when a final text will see the light of day. "All those people who are sitting in the negotiations (feel) that it is very close to finding a compromise," he said, "but we are still rather far away" on other sticking points, like enforcement.
Zenner made the comments last week during a panel session at the IAPP AI Governance Global in Boston.
He said there's only a 50-50 chance the text will be finalized before next year's elections because many details remain. More than 100 lines of text are not yet agreed upon, covering issues ranging from major provisions related to law enforcement's use of AI to smaller, more technical points that were "parked" in order to move the larger negotiations forward, all of which will take time to address.
"Last week, we had a very long meeting," Zenner said of the 27 Oct. trilogue negotiations, which lasted about 18 hours. "We made progress," he said, including around Article 6, related to the justification of high-risk AI systems. Policymakers added criteria that will allow companies "to opt out, or justify" that they are not a high-risk system, including in the employment space. He said there was also agreement on foundation models, which he described as a "two-tiered approach," with "very basic obligations."
Though negotiations are progressing, Zenner underscored a tight deadline for getting legislation across the finish line in the coming weeks. "There is a huge list of open points that we need to close very soon." The next trilogue meeting is set for 6 Dec., which, Zenner suggested, is a significant deadline. "Because of the European elections (in 2024), there would not be enough time for translations, for the lawyer linguists to go through (in time) for the vote," he said.
The points of contention are wide ranging. For example, dialogue over the potential AI enforcement body is "currently parked," Zenner said. Every reference to "legally protected interests," such as health, safety, fundamental rights and democracy, has been parked as well. Zenner said there is "strong opposition to enhance the scope of democracy, rule of law and the environment" in this context. For example, he said, how can an organization truly conduct a risk assessment on an AI system to determine if it poses a risk to democracy?
"I will say that is rather difficult," he said.
For banned use cases like social scoring and real-time remote biometric identification, there are "gray" areas "where it's not immediately clear if it's really something so risky." Zenner indicated credit scoring would currently fall under social scoring.
"So we need to make (the final text) extremely clear in order to not have side effects that we would not have wanted," Zenner said.
Another sticking point is regulatory enforcement, according to Zenner, who said negotiators are trying to take lessons learned from the EU General Data Protection Regulation and apply them to the AI Act. The GDPR's one-stop-shop enforcement mechanism is being scrutinized, for example.
"The European Parliament really wants to centralize everything a little bit more to learn from the GDPR," he said, "but of course member states do not like" losing national competence for enforcement. How powerful the AI office should be remains under debate, he said.
"At the moment," Zenner said, "there's a 50-50 chance" the AI Act will pass before the EU elections. If it's not finalized in the coming weeks, it may not come to fruition until after the elections, effectively delaying it until early 2025.
"We have too much on the table," Zenner said. "You will never manage to go through everything in every detail, but what could happen" is stakeholders could agree on the "main political points," leaving negotiators only a few weeks to go through all the technical details and the first week of February (the last scheduled date for trilogue negotiations) "to fix the biggest issues on the technical level and then we will still not manage everything."
That will leave the AI Act with "some flaws," he said, which would require a lot of guidance from the European Commission to address gaps left by the last-minute negotiations.
"Most scenarios aren't good," Zenner conceded. "I don't like any of the scenarios, but again, there's still some time."