The growing sentiment around artificial intelligence regulation is that global policymakers are being left to fit a square peg into a round hole, given the speed at which AI development is proliferating. That raises the question of whether regulation can be done "right" and what that looks like, especially with the EU looking increasingly likely to stop the clock on implementation of the landmark AI Act with an eye toward reassessing its regulatory approach.
Speaking at the IAPP and Berkman Klein Center for Internet & Society's Digital Policy Leadership Retreat 2025, Irish Member of European Parliament Michael McNamara indicated all signs point to an AI Act pause because stakeholders simply "need time and need to know what it is they have to adhere to." But he warned attendees about the potential perils of any sort of pause.
"The AI Act is very far from perfect, but I do think it was a welcomed attempt to govern in this area," McNamara said. "I think a delay is acceptable, but there comes a point at which any delay, if it's for a long time, just kind of deprives (the regulation) of the momentum it needs to work. That would be a concern."
The signs are clear, according to McNamara. Escalating pressure, notably led by the U.S., over the perceived burdens of EU digital regulation is one factor, while a lag in providing essential deliverables for AI Act implementation is another.
The chief concern around implementation stems from the European Commission pushing back the release of the general-purpose AI code of practice, which aims to help entities covered by the AI Act better understand and prepare for the act's GPAI requirements taking effect 2 Aug.
McNamara said the release of the code before the GPAI requirements take force "looks ambitious now," making an implementation delay a logical option.
"It hasn't been finalized yet and the date it was expected to be finalized was 2 May. Obviously that's passed and we don't see any immediate finalization (coming soon)," said McNamara, adding that covered entities are are focused on the code as a "presumption of compliance."
Alternative approaches
The EU is not alone in trying to strike a regulatory balance on AI. Japan and South Korea offer recent examples of frameworks that diverge from the AI Act, while U.S. state-level legislation ranges from covering cross-sectoral AI development and use to more targeted measures, including bills on automated decision-making and deepfakes.
OpenAI Associate General Counsel for AI Policy and Regulation Ben Rossen, CIPP/US, told retreat attendees AI-specific legislation is in flux, but that does not mean companies do not have existing statutes in sight when they are developing and using new technologies.
"In some ways, there is a host of regulation that already exists. There's consumer protection law, tort law, product liability law and all these things that already exist to regulate AI," Rossen said. "And yet, the very common perception is that AI is still largely unregulated."
The application of new or existing laws remains a point of friction. Companies cannot be left uninformed, according to Guido Scorza, board member for Italy's data protection authority, the Garante, and enforcers must take seriously their responsibility to spell out the law clearly.
"The tension between innovation and regulation isn't new at all," Scorza said. "We were, and probably still aren't, always able to to give industry legal certainty in time. That's our most important responsibility, because it's our duty to recognize if society is changing and needs a faster regulatory solution than in the past."
The panel discussed the potential for more self-regulation among AI companies in the absence of hard rules.
Rossen said context is important, as broad self-regulation over AI "does not strike anybody in industry as a responsible way of regulation." However, he indicated a common "preparedness framework" currently adopted across large AI developers is creating foundational standards.
While the framework isn't identical across companies, evaluating AI capabilities safely with an emphasis on risk assessment is a common priority.
"There are huge incentives already for companies to take the challenge that AI poses extremely seriously, regardless of regulation," Rossen said.
Scorza said he "can't accept" self-regulation, noting AI's inherent connection to fundamental rights, including speech and privacy, leaves "no space" for companies to police themselves. Instead, he pitched co-regulation, in which policymakers set a flexible framework aimed at closer cooperation with companies.
Policymakers are left to regulate what is "being deployed in the public space," according to McNamara, making self-regulation a measure for developers' internal practices.
"What people do in the privacy of their own labs is a different matter," McNamara said. "That's when their own regulation, boards, etc., come into play. And quite frankly, it's relationships that they have with states and nation states because there are close links."
Joe Duball is the news editor for the IAPP.