The winds of regulatory oversight for artificial intelligence are blowing in the U.S. and Europe. The European Commission signed off on its Ethical Guidelines for Trustworthy AI earlier this month, the culmination of several months of deliberations by a select group of “high-level experts” plucked from industry, academia, research and government circles. In the advisory realm, the EU guidance joins forthcoming draft guidance on AI from a global body, the Organization for Economic Cooperation and Development.

Meanwhile, U.S. federal lawmakers want something on the books. A new bill proposed by Sen. Ron Wyden, D-Ore., Sen. Cory Booker, D-N.J., and Rep. Yvette D. Clarke, D-N.Y., would require large corporations to subject their algorithmic systems to automated decision system impact assessments and data protection impact assessments. And, in February, U.S. representatives proposed their own guidelines for ethical AI in a House Resolution.

The EU’s guidance hinges on the notion of trustworthy AI, meaning AI that is lawful, ethical and robust. The fact that the detailed Brussels guidance went through a collaborative and likely combative multi-stakeholder tuning process is evident in its length and its Russian-doll structure of components, principles and requirements. Still, coming from the same body that shifted the tectonic plates of privacy law with the GDPR, it could influence AI ethical standards across the globe.

Four principles or “ethical imperatives” call for AI systems to respect human autonomy, prevent harm, incorporate fairness and enable explicability. Another layer of guidance advises that AI respect human dignity, individual freedom, democracy, justice, the rule of law, equality, non-discrimination, solidarity and citizens’ rights. The document then translates those goals into concepts that apply directly to more technical considerations for AI, such as resilience to attack, data quality and privacy, avoidance of unfair bias, auditability, transparency and explainability for AI-based decisions. Finally, a list of more practical methods for implementation follows.

The meaning of lawful AI

The official guidelines make clear that “A number of legally binding rules at European, national and international level already apply or are relevant to the development, deployment and use of AI systems today.” This emphasis on “lawful” AI seems to be an important distinction between this final version of the document and an earlier draft.

“The emphasis on Lawful AI in the official guidelines indicates that the implementation of existing law is part of the practice of AI Ethics,” said High-Level Expert Group Member Gry Hasselbalch, a co-founder of Denmark-based data policy think-tank DataEthics, in an email sent to Privacy Tech.

We can expect lawyers and decision-makers to read the tea leaves sprinkled throughout this document for a sense of whether the prominence of the lawfulness concept indicates that current European law, such as the General Data Protection Regulation, the EU Charter of Fundamental Rights and the anti-discrimination directives, suffices to deliver protections against potential AI-enabled harms.

There is still debate as to whether the GDPR’s right to an explanation applies to AI and decisions made by autonomous technologies. However, Chris Hankin, co-director of the Institute for Security Science and Technology and a professor of computing science at Imperial College London, said he thinks aspects of that privacy regulation already address ethical AI considerations, particularly the right to an explanation.

Bottom line, the question the AI industry and entities deploying AI want answered is: “Will Europe regulate or establish new laws for AI?”

Hasselbalch said there’s no telling yet. “We are an independent expert group, so what the official EU system will do, we don’t know. But I assume that they will listen to the experts they appointed themselves.”

The EU will soon be joined by the OECD’s Committee on Digital Economy Policy, which is set to publish its own draft recommendations for intergovernmental AI policy guidelines in May, based on final guidance approved earlier this year by its global expert group representing policy, privacy and corporate tech. Those recommendations are expected to mirror some of those seen in the EU guidance, including human-centered values and fairness, transparency and explainability, robustness, safety and accountability.

Many of these same principles are present in other guidance for ethical AI from national governments and trade groups, including the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers.

Next step: EU policy

For the EU, establishing advisory guidance is just the first half of the process. As noted in the guidelines, “To the extent we consider that regulation may need to be revised, adapted or introduced, both as a safeguard and as an enabler, this will be raised in our second deliverable, consisting of AI Policy and Investment Recommendations.” That policy crafting is currently underway.

Hankin suggested the European Commission could use the upcoming policy component of the guidelines to influence discussion around a directive or potential regulation. “I wouldn’t be surprised to see the Commission pick up the policy document when it’s produced and use it to start a debate … but we’re at the start of a very long process,” he said.

Photo by Wesley Tingey on Unsplash