The winds of regulatory oversight for artificial intelligence are blowing in the U.S. and Europe. The European Commission signed off on its Ethics Guidelines for Trustworthy AI earlier this month, the culmination of several months of deliberations by a select group of “high-level experts” plucked from industry, academia, research and government circles. In the advisory realm, the EU guidance joins forthcoming draft guidance on AI from a global body, the Organisation for Economic Co-operation and Development.
Meanwhile, U.S. federal lawmakers want something on the books. A new bill proposed by Sens. Ron Wyden, D-Ore., and Cory Booker, D-N.J., and Rep. Yvette D. Clarke, D-N.Y., would require large companies to conduct automated decision system impact assessments and data protection impact assessments on their algorithmic systems. And, in February, U.S. representatives proposed their own guidelines for ethical AI in a nonbinding House resolution.
The EU guidelines lay out four principles, or “ethical imperatives,” calling for AI systems to respect human autonomy, prevent harm, incorporate fairness and enable explicability. Another layer of guidance advises that AI respect human dignity, individual freedom, democracy, justice, the rule of law, equality, non-discrimination, solidarity and citizens’ rights. The document then translates those goals into concepts that apply directly to more technical considerations for AI, such as resilience to attack, data quality and privacy, avoidance of unfair bias, auditability, and transparency and explainability for AI-based decisions. Finally, the document closes with a list of more practical methods for implementation.
The meaning of lawful AI
The official guidelines make clear that “A number of legally binding rules at European, national and international level already apply or are relevant to the development, deployment and use of AI systems today.” This emphasis on “lawful” AI seems to be an important distinction between this final version of the document and an earlier draft.
“The emphasis on Lawful AI in the official guidelines indicates that the implementation of existing law is part of the practice of AI Ethics,” said High-Level Expert Group Member Gry Hasselbalch, a co-founder of Denmark-based data policy think-tank DataEthics, in an email sent to Privacy Tech.
We can expect lawyers and decision-makers to read the tea leaves sprinkled throughout the document for a sense of whether the prominence of the lawfulness concept signals that current European law, such as the General Data Protection Regulation, the EU Charter of Fundamental Rights and the anti-discrimination directives, suffices to deliver protections against potential AI-enabled harms.
There is still debate over whether the GDPR’s right to an explanation applies to AI and decisions made by autonomous technologies. However, Chris Hankin, co-director of the Institute for Security Science and Technology and a professor of computing science at Imperial College London, said he thinks aspects of the regulation already address ethical AI considerations, particularly the right to an explanation.
Bottom line: The question those in the AI industry and entities deploying AI want answered is, “Will Europe regulate or establish new laws for AI?”
Hasselbalch said there’s no telling yet. “We are an independent expert group, so what the official EU system will do, we don’t know. But I assume that they will listen to the experts they appointed themselves.”
The EU will soon be joined by the OECD’s Committee on Digital Economy Policy, which is set to publish its own draft recommendations for intergovernmental AI policy guidelines in May, based on guidance approved earlier this year by its global expert group representing policy, privacy and corporate tech. Those recommendations are expected to mirror several principles seen in the EU guidance, including human-centered values and fairness; transparency and explainability; robustness and safety; and accountability.
Many of these same principles appear in other guidance for ethical AI, such as that issued by the Association for Computing Machinery.