
The proposed EU Artificial Intelligence Act is anticipated to pave the way for a regulated approach to the future development of artificial intelligence. One means of testing new AI technologies is through regulatory sandboxes created by various data protection authorities around Europe.

To explore how AI regulatory sandboxes are helping companies develop their machine-learning models, IAPP Managing Director, Europe, Isabelle Roccia hosted a LinkedIn Live session Dec. 12 with Secure Practice co-founder and CEO Erlend Andreas Gjære; European Commission Legal and Policy Officer Yordanka Ivanova; and Kari Laumann, project manager for research, analysis and policy at Norway's data protection authority, Datatilsynet.

The ultimate place for regulatory sandboxes within the AI Act is still up in the air. Some policymakers believe they are vital for entities to test their machine-learning algorithms. Others have been less bullish on their usefulness, arguing companies will resist giving national data protection authorities access to their systems unless they are granted leniency from potential EU General Data Protection Regulation violations while their algorithms are tested in sandboxes.

During an interview for The Privacy Advisor Podcast, AI Act co-rapporteur and Romanian Member of the European Parliament Dragoș Tudorache said he believes that, if the legislation is passed, the key to enforcement is for companies to use regulatory sandboxes to iron out privacy concerns before systems are introduced to the public. Ultimately, he envisioned each member state operating a sandbox of its own to spur AI innovation continent-wide. He said best practices and expertise could be gleaned from each country’s sandbox, which may allow for better governance and enforcement of the legislation.

Ivanova said the EU Council views the development of AI as “very beneficial.” To minimize the risk of harm and bias in AI, she said, testing algorithms in a sandbox will likely pay dividends for developers who build their models in conjunction with a DPA, especially those training high-risk models.

“We can support companies through the sandboxes … how they should implement new requirements of the legislation, because we targeted them (as) high-risk,” Ivanova said. “We want the competent authorities to create (sandboxes) and give opportunities for companies to apply ... this regulatory advice to get legal certainty how the new rules would apply in their specific case.”

As currently constituted, the proposed EU AI Act does not require each member state’s DPA to create a regulatory sandbox, Ivanova noted. She said her hope is to have more established DPAs take the lead in developing best practices for creating sandboxes to forge a uniform standard going forward.

“We recognize that there is a demand, whether that's something at the pan-European level, or for countries that might not have (as many) resources,” Ivanova said. “The objective is to have standards that would support companies to implement, but they will be always open questions.”

Others have been skeptical of the potential of regulatory sandboxes.

DIGITALEUROPE Director for Infrastructure, Privacy and Security Policy Alberto Di Felice, CIPP/E, said sandboxes have been a “neglected” element of the AI Act. To incentivize buy-in from companies developing AI-based products, he said, the final legislation must include DPA waivers of GDPR liability for the handling of personal data for experimentation purposes.

Di Felice's concern is shared by the Center for Data Innovation, whose AI policy analyst, Patrick Grady, called the sandbox provisions of the AI Act “a sandbag” that is “weighing down firms with more regulatory complexity while offering them little respite from existing rules.”

However, there are real-world examples to look at. For Gjære, whose company sells security tools and training software to businesses, the decision to open up to possible regulatory scrutiny by participating in the Datatilsynet's sandbox program was not taken lightly. During the LinkedIn Live session, he said that while some companies may have reservations about exposing their machine-learning models to the scrutiny of data protection regulators, doing so would only increase the quality of their algorithms.

"Just the risk of exposing your idea, your system to regulatory authorities, there is a risk. You could get shut down," Andreas said. "The way the Norwegian DPA has been for the last few years, they've been very approachable (in how they're) advising companies. We were confident we would (have) a good two-way discussion, although, in the end, you have to obey the law. You have to be compliant."

As an early adopter of the AI regulatory sandbox model, Norway prioritizes innovation using responsibly developed AI, Laumann said. The sandbox was developed with input from numerous stakeholders, including representatives from academia and business, who told the agency they were able to work within the GDPR requirements the Datatilsynet imposed on sandbox users. However, stakeholders also said they did not have a firm grasp of how to comply “in practice” once they started testing algorithms within the sandbox.

The responsible AI development goal "is also reflected in the strategies and policies from the EU that you want innovation, but you want it to happen in line with European values and fundamental rights,” Laumann said. “So that was the starting point for our sandbox where we want to try to help companies to understand the rules and to explore the gray areas.”

However, Laumann said the Datatilsynet, for now, will not grant any waivers for potential GDPR violations if a product were to be publicly released. She said the Datatilsynet only offers feedback on potential compliance issues to sandbox participants, which are selected through an application process. Following the experimentation phase, each system's developer and Datatilsynet staff create an impact-assessment report to evaluate the effectiveness of the project’s data protection components.

Laumann said the criteria for participation in the Datatilsynet's sandbox are that the project must be AI-based, it must address a specific privacy question, the applicant must be a Norwegian-based company, and the privacy issue the applicant seeks to solve with its algorithm must serve a societal purpose beyond the individual developer's goals.

“We tried to help many by helping one,” Laumann said. “Our sandbox doesn't give very many perks, except for guidance. We don't give any monetary support, we don't provide infrastructure support, we don't give an exemption from the rules, we don't give a stamp of approval. We basically give advice.”

