The European Union wants to figure out how to properly regulate artificial intelligence, and everyone has an opinion on how to do it.
European Data Protection Supervisor Wojciech Wiewiórowski broached the topic in writing last month.
Wiewiórowski addressed AI again during the Annual Privacy Forum 2020 this week, noting that while the European Commission is preparing to tackle the issue, a law created specifically to address AI is not the right path to pursue.
"I’m very cautious about the approach to this exercise. While there are probably many things, especially civil liabilities, that are concerns which need to be addressed, I don’t think we should create a special law for (AI)," Wiewiórowski said. "It’s my perspective that most things connected with privacy are already observed in the (EU) General Data Protection Regulation."
Portuguese Data Protection Authority President Filipa Calvão agreed with Wiewiórowski's assessment. Calvão believes crafting AI legislation may prove too difficult a challenge, adding there is no guarantee such a law would properly protect citizens' rights given the complexity of the technology.
Calvão cited a simple reason behind this problem, one that has been a common refrain surrounding any discussion around privacy-adjacent legislation: Technology moves too fast, and the laws simply cannot keep up.
"The difficulty has mainly to do with the development of information and communication technology, in particular in the last decade," Calvão said. "The inability of the law to anticipate or to follow up the technology in an appropriate way is why the idea of (AI) legislation might not be the perfect answer to this problem."
A specific AI law may not be the silver bullet the EU is looking for, but a regulatory framework could be the answer. Wiewiórowski said that should the EU pursue this path, it is necessary for any negotiated framework to include three important principles.
An AI framework, he said, must protect individuals and society from any negative impacts brought forth by AI, ensure any potential harm posed by AI applications is matched with appropriate mitigation measures, and hold EU member states and institutions to the same set of rules.
"It does not make sense to create a law for the member states and the entities in the member states and another law for the EU institutions," Wiewiórowski said. "They should be the same."
Absent an official framework, the onus may fall on the organizations developing and using AI. Calvão turned her focus to the AI accountability measures recently approved by the Global Privacy Assembly, citing some of the noteworthy recommendations companies should implement.
"The first one is to assess the potential impact of human rights before the development and the use of (AI)," Calvão said. "Then, test the reliability, robustness, accuracy and the security of the data in that (AI) context before putting it into use, mainly identifying and assessing the biases in the system and making sure that we manage to control those biases because that’s the biggest danger that we’ve got there."
Other GPA recommendations Calvão said should be on the radar include providing clear explanations for decisions made by AI systems, monitoring those systems' performance and providing an audit when a regulator comes knocking.
She said data protection authorities will check to see whether AI systems are respecting data subjects' fundamental human rights, adding AI developers and users need to be prepared for both internal and external audits.
It shouldn't come as a surprise to hear that even AI audits can be challenging. European Data Protection Board Deputy Chair Ventsislav Karadjov said since there is still so much to learn about the technology, it may be difficult for regulators to conduct an investigation and for AI developers to provide the answers they need.
"We should not expect the regulators to audit (AI) to a certain extent because, for the regulator to be able to objectively make a decision, they have to know exactly what is going on in the black box, and in most cases, the software developers cannot explain it," Karadjov said.
Whether the EU adopts an AI-focused regulatory framework or organizations take heed of the GPA's principles remains to be seen. The regulators are not sure which way the EU will go, but they understand what AI can do. AI may be the catalyst to jumpstart a whole new era of technological innovation, but it could also be used for nefarious purposes if left unchecked.
"We must ensure that Europe will be growing (AI), but it will not come at the cost of the values that we observe," Wiewiórowski said.
"Today, we know that based on target profiles and probabilities, it is possible to predict future events and future individual behavior with a relatively small margin of error," Calvão said. "The impact on privacy is quite obvious. There’s a power to manipulate individual behavior or control their lives, conditioning and restricting one’s freedom, and there’s still the risk of discrimination based on errors and bias."