This week, privacy and data protection commissioners from more than 100 countries are gathering in Brussels for the 40th annual International Conference of Data Protection and Privacy Commissioners. The debate will focus on digital ethics, including topics that exceed the traditional remit of privacy professionals. For example, how can we ensure due and equitable process in an age where machine learning and artificial intelligence make decisions about road safety, healthcare and education? How can organizations ensure access to individuals’ data for socially beneficial purposes without sacrificing privacy and individual rights? How can societies protect against the threat of malevolent meddling in democratic elections through social media tampering? Facing these challenges, privacy regulators and other data professionals realize that answers lie beyond regional and national data protection regulations and frameworks, and require extending ethical principles and institutions.
Today, the IAPP and UN Global Pulse release a joint report titled "Building Ethics into Privacy Frameworks for Big Data and AI." The report provides an overview of how organizations can operationalize data ethics, drawing on discussions at an event the two organizations co-hosted. It also highlights findings from additional research on data ethics and international privacy best practices.
The "Building a Strong Data Privacy and Ethics Program: From Theory to Practice" event took place in May 2017 and brought together representatives of international organizations, privacy regulators, academics, NGOs, and business leaders to learn about, discuss and identify best practices in data privacy, protection and ethics for using big data for the public good. At the opening of the event, Robert Kirkpatrick, Director of UN Global Pulse, noted that ethical decision-making requires minimizing not only the risk of data misuse but also that of missed use, that is, of leaving crucial data resources untapped in the global fight against famine, plague and war. “As global efforts to develop new frameworks around the responsible use of emerging technologies begin to take shape, it is imperative that they address not only the human rights implications of ‘misuse,’ but also those of ‘missed use,’” he said.
The participation of stakeholders from different sectors opened opportunities to view big data as a resource not only for common business analytics purposes, but also for humanitarian and development aid. For example, insights gleaned from big data sources can be used to predict the spread of disease, prioritize resource allocation, or target relief to the most vulnerable populations.
Getting access to data for scientific research also presents a number of challenges. Tremendous volumes of data that could inform development practitioners are today collected by health care facilities, pharmaceutical companies, bioengineering innovators, social media platforms, banking institutions, and online retail services. Policymakers could work to unlock the value of big data by providing researchers with access, not just for medical research but also for the social sciences. This, of course, must be done with due regard for, and protection of, privacy.
The report also outlines a number of tools and methodologies, including data protection and ethical impact assessments, that practitioners can use to ensure the responsible development of big data applications. It highlights the need for such assessments to take into account the harms that may be caused to individuals and groups by both the use and non-use of data for public good.
Other recommendations include: setting up Administrative Data Research Facilities (ADRFs) to store and regulate access to existing data sets; developing internal review boards (IRBs) to vet requests for data access against ethical principles; or establishing external advisory boards or groups. Moreover, promising new technologies increasingly enable the processing of data to draw valuable conclusions while minimizing the effects on individuals’ privacy. Using homomorphic encryption, for example, organizations could conduct analysis on, and draw lessons from, data in encrypted form.
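The report does not prescribe any particular implementation, but the core idea behind homomorphic encryption can be illustrated with a toy, additively homomorphic Paillier cryptosystem: an analyst can sum encrypted values without ever seeing the plaintexts. This is a minimal sketch for illustration only; the toy primes and from-scratch code are assumptions for readability, and real deployments would use vetted cryptographic libraries and far larger keys.

```python
# Toy Paillier cryptosystem illustrating additive homomorphic encryption.
# NOT for production use: tiny primes, no padding, no side-channel defenses.
import random
from math import gcd

p, q = 61, 53                 # toy primes; real keys are ~2048 bits
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                     # standard simple choice of generator

def L(x):
    """The Paillier 'L' function: L(x) = (x - 1) / n."""
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext using the private values lam and mu."""
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so an untrusted party can compute a sum over encrypted data.
c1, c2 = encrypt(42), encrypt(17)
assert decrypt((c1 * c2) % n2) == 42 + 17
```

In a data-for-good setting, this is the shape of the workflow the report alludes to: data holders encrypt their records, a researcher aggregates the ciphertexts, and only the data holder (or a trusted authority) decrypts the aggregate result.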
The IAPP and UN Global Pulse report aims to serve as a basis for public-private sector collaborations to use big data and analytics for the social good, while ensuring individual privacy and respect for human rights. It aims to add to the discussion in Brussels this week, led and hosted by the European Data Protection Supervisor, by extending beyond traditional privacy mores and into the fascinating normative debates that will govern our embrace of new technologies for years to come.
Big data, new technologies, and new analytical approaches, if applied responsibly, have tremendous potential to be used for the public good. At the same time, big data amplifies risks to privacy, fairness, equality, and due process. The United Nations Global Pulse and the International Association of Privacy Professionals explored these issues at a jointly hosted event, “Building a Strong Data Privacy and Ethics Program: From Theory to Practice,” held in May 2017 in New York. Now, the two organizations are co-releasing a white paper, “Building Ethics into Privacy Frameworks for Big Data and AI,” which builds on that event and on extensive research into the state of ethical frameworks and how they can be operationalized to meet the challenges of big data and AI going forward.
Read the Report