Data has become an integral component of business, from operational decision-making to artificial intelligence product development. However, amid ethics scandals and near-constant privacy concerns, corporations and researchers are questioning long-accepted data-harvesting and research practices. Meanwhile, attempts to establish internal ethics review bodies are subject to intense scrutiny.

Some believe traditional ethics review processes must be updated for the modern age, and efforts underway from legislators, the Future of Privacy Forum and a National Science Foundation–funded project aim to do just that.

When NBC last month exposed how photo app provider Ever used its customers’ photos to train the Ever AI facial-recognition system, the practice raised hackles among everyday consumers already wary of creeping personal data exploitation at every turn. The report followed an earlier NBC story revealing IBM’s use of Flickr photo data to create a diverse data set intended to produce fairer facial-recognition systems.

The thing is, scraping publicly available data sources is so commonplace among academic and corporate researchers in AI and other data-reliant fields that the practice may not have raised an eyebrow internally at Ever or IBM.

Other corporate research has also triggered alarms in recent years — perhaps most notably Cambridge Analytica’s use of Facebook quiz data to inform the analytics and targeting deployed in the 2016 presidential election, and Facebook’s own 2014 “emotional contagion” experiment, in which the company altered users’ news feeds without their knowledge to study effects on their moods.

Future of Privacy Forum CEO Jules Polonetsky and others argue that traditional protocols for academic research involving human subjects, namely the Belmont Report of 1979 that guides Institutional Review Boards in academic settings, should be adapted to the realities of today’s corporate data use. “The traditional privacy rules and laws and debates don’t really take into account concerns that are really driven by imbalances in power,” said Polonetsky, referring to often-disregarded impacts on marginalized communities.

Already the FPF has advised that corporations undergo an ethical review process before conducting internal research involving genetic data and data generated by wearables, such as fitness tracking devices. At this early stage, the organization has not settled on whether that review process would involve a board, a community group or outside experts, or whether it would be internal or external in nature.

The trials of going public

Axon, maker of Taser guns and video-streaming body cameras for law enforcement, has held three in-person meetings of its AI and Policing Technology Ethics Board, launched in 2018. Acting on the board’s first recommendation, the firm held a full-day bias-awareness training session in May for Axon AI developers, modeled on the Fair and Impartial Policing training it has presented to police departments.

The company, whose law enforcement products have been criticized by civil rights groups, currently employs AI to redact images such as faces or other identifiable objects from police videos made available through Freedom of Information Act requests. Axon stresses that it does not offer, nor is it developing, facial-recognition technology that could be used for identification purposes.

“When we were forming the board, one thing the experts we talked to said was critical was that we had to be as transparent as possible,” Axon Ecosystem Vice President Mike Wagers stated in an email to Privacy Tech. “That included announcing the formation of the board, listing board members publicly on our website, publishing how we would relate to the board (operating principles, also listed on the website) and distributing any reports the board produces.”

Going public doesn’t always go well.

In late March, a highly publicized ethics board attempt by Google prompted a visceral backlash and may have deterred some firms from creating their own ethics review bodies. The most controversy came from the company’s choice to include board member Kay Coles James, president of the Heritage Foundation, a conservative think tank whose stance against LGBTQ rights many AI ethics activists considered inherently unethical. Hundreds of Google employees signed a petition to remove her before the company decided to shut the board down entirely after about a week.

Rather than focusing on making a splashy announcement, Google should have given more consideration to building a group with credibility, Markkula Center for Applied Ethics Director of Technology Ethics Brian Patrick Green said. “Google was making too big a deal out of it.”

Referring to Axon’s board, Wagers said, “You have to understand you will make mistakes and not get everything right. We did not hold up kicking off the board in April 2018 until we had everything right; we wanted to get started and have the board advise us on how to continually improve things, such as the diversity of the board and how it operates.”

What counts is how companies respond to ethical considerations in the actual research and product decisions that affect the bottom line, said Cherri Pancake, president of the Association for Computing Machinery (ACM), which recently updated its ethics code. “It’s so easy to make apple pie and motherhood statements about what needs to be done,” she said. “It’s a very different thing to put it into place and have it be effective.”

Beyond legal compliance

Companies are also under pressure to ensure that ethics review processes are more than just moral posturing. Google’s short-lived board, for instance, would have met four times this year to evaluate the firm’s AI technologies and would have had little authority.

Google’s ethics board “didn’t go far enough,” University of Maryland College of Information Studies Assistant Professor Jessica Vitak said. “If you are going to create this group and they’re going to meet four times a year and they have no actual power, what is the point?”

Vitak is a principal investigator on the NSF-funded Pervade project, which examines ethics approaches to pervasive data: data rich in personal information, generated through digital interactions and accessible for computational analysis. Currently, project researchers are surveying academic Institutional Review Board members regarding their protocols involving pervasive data.


Legislators, too, see a need for sounder corporate data research ethics. In April, Sens. Mark Warner, D-Va., and Deb Fischer, R-Neb., introduced the DETOUR Act, which would prohibit online operators with 100 million users or more from segmenting consumers into groups for behavioral or psychological studies without each user’s informed consent. The bill would also require such firms to establish an IRB for any behavioral or psychological research conducted with user data and to give the board authority to review, approve, modify or disapprove such research.

Ultimately, the confluence of efforts to update corporate data and research ethics portends a broader move away from a focus on mere legal compliance toward embedding ethics-by-design that considers all stakeholders.

Said the ACM’s Pancake, “This business about doing things to comply really doesn’t make sense in a field that is so far ahead of regulatory and legislative controls.”
