The Privacy Advisor | Should we create a certification for AI ethics?

What does it mean to certify artificial intelligence ethics? Should one certify AI ethics at all? These are the key questions in a debate over projects under the auspices of the IEEE Standards Association that aim to address ethical issues relating to the creation of autonomous and intelligent systems.

The IEEE Standards Association marshals the creation of technical standards for things like wireless communications and software validation. It doesn't initiate standards projects on its own, but rather provides a forum through which those who want to create a new standard can form working groups and do their thing — once they have a draft, it can enter the official approval process to become an IEEE standard.

In 2015, the association set up something called the Global Initiative on Ethics of Autonomous and Intelligent Systems, which it described as an "incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies."

With a community of roughly 800 people, many of whom are volunteers, the Global Initiative has created a weighty, still-under-development document on Ethically Aligned Design, known internally as "the book," and come up with ideas for various standards projects. These are known as the P7000 series, which includes around a dozen putative standards for things like "transparency of autonomous systems," a "data privacy process," "algorithmic bias considerations," an "ontological standard for ethically driven robotics and automation systems," and so on.

Fast forward to October this year, and the IEEE took things a step further by launching an "industry connections program" called the Ethics Certification Program for Autonomous and Intelligent Systems. This new group is only open to paying members of the standards organization's Advanced Corporate Membership program, although that doesn't mean it's only for companies — government bodies and civil society groups can also participate, as long as they pay up.

"From research we conducted, there is a lot of evidence that many organizations — profit, non-profit — are not only saying 'Should we be accountable in relation to autonomous systems?' but rather 'How do we do these things?'" said John Havens, the Global Initiative's executive director. "Now people can start to demonstrate: 'We're not just saying A/IS is important, but here's an impact assessment for our algorithms.'"

The idea is to create a series of set processes with which companies can comply in order to demonstrate "ethics certification" — a third party would judge conformity, much as an outfit like TRUSTe does in the privacy world.

Not everyone thinks this is a great idea.

"Let me *specifically disavow* @IEEEorg's efforts to create an ethical certification program," tweeted Ryan Calo, a University of Washington School of Law professor and a member of the Global Initiative's law committee in late October. "IEEE is an important organization we should look to for thought leadership. But offering an ethical certification is as dangerous as it is premature.

"You simply cannot certify ethics, and purporting to do so gives cover to organizations interested in moving past AI's real social impacts," Calo continued. "We tried this for *decades* in privacy, and we know how that worked out."

Calo explained to The Privacy Advisor what he meant: "The allure of self-regulation is that no actual regulation is necessary because companies have imposed limits on themselves. Society should set the constraints around artificial intelligence in the form of law and official policy," he said.

As for the reference to failed privacy certification schemes, Calo pointed to the U.S. Federal Trade Commission's enforcement action against TRUSTe (over deceiving consumers by not conducting annual re-certifications) several years back, as well as the FTC's similar 2010 settlement with privacy and security certification outfit ControlScan. "I draw the lesson that this is a bad model," he said.

Responding to Calo's criticisms, Havens stressed that ECPAIS is not a self-certification framework. He also noted that the references to ethics did not involve "ethics in the sense of morals or cultural values," so much as accountability.

"The bigger goal is to say, 'Are we [as in, a company using these processes] building what we said we would,' and more importantly 'How can we recognize more unintended negative consequences to make sure the thing we're designing avoids risk or harm and increases safety or trust?'," Havens explained.

To Calo's point about the need for laws and official policies around autonomous and intelligent systems, Havens said these were not mutually exclusive with the certification processes being built. "A lot of times, standards will be referenced in policy," he said.

This was also a point raised by Matthew Stender, a Berlin-based tech ethicist and researcher who is involved in the Global Initiative algorithmic bias group. "If these standards don’t exist, then policymakers have to write the rules from scratch," he said. "If we take the idea of a technical standard or certification, [formulated] outside of traditional [political] pressures, I'd say that is a net good. That is a contribution to thought, a complement to a discourse in which there is perhaps no right, there is no absolute, the rules have not yet been written."

Stender suggested that the alternatives to the IEEE-convened discussions in question would involve either "toolkits by consultancies" or government strategies. Regarding the latter, he noted that governments tend to see AI as a matter of national priorities, around issues such as labor markets, but the development of the technology is taking place in a transnational context.

Asked whether there was another way to achieve ECPAIS's aim of providing a way to gauge trustworthiness, Calo responded: "There's the way we do it in literally every other sector: health, energy, finance, agriculture, transportation, and so on. Through regulation."

"Although people have said that tech companies should be regulated like utility companies, I have not seen a workable model that scales to the internationalness of these technology companies," said Stender in response to that point. The ethicist added that — certainly in the U.S. — regulators' hands were tied by trade secret laws and the "speech is code" model. "For me, the idea of voluntary technical standards provide an interesting alternative to national legislation," he said.

