Following the success of the General Data Protection Regulation in setting the global standard for data protection, the European Union is doubling down on its position as the ethical regulator for technology.
On April 9, a high-level expert group set up by the European Commission and comprising 52 independent experts representing academia, industry and civil society presented its first set of ethical guidelines for artificial intelligence.
Specifically, the aim is to “build trust” in AI by establishing clear policies for developing and providing AI tools. More generally, it is part of how Europe plans to export norms and values — just like with the GDPR.
“Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust,” Commission Vice-President for the Digital Single Market Andrus Ansip explained.
During a so-called pilot phase that will begin this summer, the commission wants industry, research institutes and public authorities to test-drive the guidelines and give feedback.
Organizations can sign up to the European AI Alliance to receive a notification when the pilot starts. The commission also plans to launch networks of AI research excellence centers before the end of the year.
Cecilia Bonefeld-Dahl, director-general of Digital Europe and a member of the expert group, explained, “This means that the high-level expert group will receive detailed practical feedback before finalising the document. Only through an agile process and real-life sandboxing of the proposal can we learn and avoid unforeseen consequences of policy making. Looking at how to apply AI in particular, there are extensive benefits to be realized in society. We need to get it right in order to drive European innovation and welfare and to avoid the risks of misuse of AI. We outline the common European values and principles that AI should respect.”
And many big organizations are on board.
Vodafone Group Privacy Officer Mikko Niva told The Privacy Advisor, “Vodafone uses AI to improve our products and services for our customers, such as chat bots to enhance customer service, and to run our business more effectively. As a company with European roots, the EU’s draft AI ethics guidelines reflect Vodafone’s own approach, ethics and privacy policies to protect consumers while leveraging trustworthy technology as a competitive advantage. Just as Europe set a global standard with GDPR, these guidelines will also provide much welcomed global principles in AI.”
ETNO, the association representing Europe’s telecom operators, said, “It is crucial for the European Union to support industrial leadership in this field, so that citizens can also choose from solutions inspired by European values. We recognise that implementing and testing the assessment list will be a huge task, but we support a flexible approach built on regular feedback from adopting organisations to allow for adjustment of the guidelines.”
Not everyone is as enthusiastic, however. Center for Data Innovation Senior Policy Analyst Eline Chivot said the new ethics guidelines are "a welcome alternative to the EU’s typical 'regulate first, ask questions later' approach to new technology. However, the document falls short in a number of areas. Most importantly, it incorrectly treats AI as inherently untrustworthy and argues the principle of explicability is necessary to promote public trust in AI systems, a claim which is unsupported by evidence.”
“Most importantly, the belief that the EU’s path to global AI dominance lies in beating the competition on ethics rather than on value and accuracy is a losing strategy. Pessimism about AI will only breed more opposition to using the technology, and a hyper focus on ethics will make the perfect the enemy of the good,” she continued.
According to the guidelines, there are seven essentials for achieving trustworthy AI: human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, fairness and accessibility; societal and environmental well-being; and accountability.
It would be difficult to argue with any of these principles, but some consumer and digital rights groups believe they are too vague to be implemented in practice.
European consumers organization BEUC, international digital rights NGO AccessNow, and ANEC, the European association representing the consumer voice in standardization, in principle welcomed the guidelines but said the commission should go further and carry out “comprehensive mapping of existing legislation that applies to AI development and deployment, and an identification of legal uncertainties and gaps,” as well as update existing legislation, where needed, “particularly in the fields of safety, liability, consumer and data protection law.”
In terms of privacy and data governance, the guidelines state that “citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.” But ensuring that AI applications do not undermine the right to data protection is trickier in practice.
Allowing data subjects meaningful control over data processing and its effects would require a greater understanding of AI systems than the average individual has. Under the GDPR, data subjects also have the right not to be subject to decisions based solely on automated processing where those decisions produce legal or similarly significant effects.
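In engineering terms, that right is often honored with a risk-based routing step in the decision pipeline, so that high-impact model outputs are gated behind human review. The following sketch is purely illustrative; the names, fields and threshold are invented for this example.

```python
from dataclasses import dataclass

# Hypothetical sketch of risk-based human oversight, in the spirit of
# GDPR Article 22: decisions with significant effects on a person are
# never finalized by the model alone. All names and thresholds here
# are invented for illustration.

@dataclass
class Decision:
    subject_id: str
    model_score: float   # e.g., the model's confidence in approving a loan
    legal_effect: bool   # would the outcome significantly affect the person?

def route(decision: Decision, confidence_threshold: float = 0.9) -> str:
    """Return whether a model output may stand alone or needs a human."""
    if decision.legal_effect:
        # Legally significant decisions always get human involvement.
        return "human_review"
    if decision.model_score < confidence_threshold:
        # Low-confidence outputs are escalated even for minor decisions.
        return "human_review"
    return "automated"

print(route(Decision("app-123", model_score=0.95, legal_effect=True)))   # human_review
print(route(Decision("app-456", model_score=0.97, legal_effect=False)))  # automated
```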
Riccardo Masucci, global director of privacy policy at Intel, said decisions made by autonomous technologies may have an impact on individuals' "possibility to self-determination." He said increased automation shouldn't translate to less privacy.
"We think organizations should embrace accountability approaches to minimize risks for individuals and deliver privacy by design," Masucci said. "These include impact assessments throughout the product lifecycle as well as investments in technical solutions like homomorphic encryption and federated learning that we deem very promising developments in the context of AI. Additionally, we believe that risk-based degrees of human oversight could represent a practical solution to allow for innovative uses of AI and to ensure control over autonomous decision making when necessary."
Katherine O’Keefe, head of research and training at Castlebridge in Ireland, explained that because the GDPR and the ethical guidelines "both come from a very European human rights-based perspective, there isn't as of yet any sort of conflict. They both come from the same source and keep the same focus. I think that’s possibly a weakness in the ethical guidelines in general because we need to consider other ethical viewpoints when it comes to guidelines for AI. But I don't see a conflict currently.
“The next step with the guidelines will be to see where they hit the ground. At the moment, they are quite vague, and they're looking at more specific use cases and things like that. However, the GDPR is a useful example of how ethical principles can become actual regulation that must be followed,” she continued.
But O’Keefe also sees some drawbacks in the current guidelines.
“Specifically one of the big gaps that I see is that it is wholly focused on the European human rights perspective. And when it comes to global application of ethics, we need to take into account other ethical frameworks such as for instance in Asian philosophies and the many indigenous ethical frameworks where there is a much stronger focus on relationality. So that's one of the things that needs to be considered as well,” she added.
“As guidelines go, there aren't even any red lines in the final draft. But they are, as far as I can see, useful from a strategic or C-level perspective, which would hopefully guide organizations at a leadership level rather than adding regulations for the DPO to work with. So hopefully it should dovetail with the GDPR rather than add more of a burden,” O’Keefe told The Privacy Advisor.
She added that when it comes to AI ethics, there is no need to reinvent the wheel.
“AI ethics is a form of applied ethics; it’s not new ethics. We don't need to create new ethics for AI. We need to figure out how to understand and apply ethics in this particular applied position. There are many areas of applied ethics where we can learn a lot from what has already been done. We need to take a holistic approach. We need to remember that ethics guide laws and ethics never work on their own. It's not only ethics or only the law. Ethics come before, in the middle, and after legislation. So the guidelines should eventually guide binding legislation and red lines that must be followed,” she concluded.