
The Privacy Advisor | Are Your Privacy Impact Assessments Stuck in the 1970s?


The concept of the privacy impact assessment (PIA) has its roots in the age of mainframe computers, when compliance was the key to avoiding regulatory action or consumer backlash. But massive datasets, advanced algorithms and a growing number of data collection points have made those age-old PIAs a thing of the past, and they mean organizations must move beyond mere compliance into governance. Though big data can help companies innovate and bring tangible benefits to consumers, the ethical use of that data is being watched closely not only by privacy advocacy organizations but by regulators, too.

At this year’s Privacy. Security. Risk. Conference in Las Vegas, NV, a new PIA paradigm was presented to help companies engaged in big data be both innovative and ethical. During the preconference workshop “Big Data Project Vetting to Assure Fair and Innovative Data Use,” Information Accountability Foundation (IAF) Executive Director Martin Abrams, together with IAF Senior Scholar Lynn Goldstein, Acxiom Privacy Officer Sheila Colclasure, CIPP/US, and the Better Business Bureau’s Genie Barton, took a deep dive into the need for ethical big data assessments (EBDAs).

Abrams noted that a rise in data breaches and increased regulatory penalties puts the onus on organizations to defend their use of data, and that, often, “reticence risk,” the decision not to process big data because of uncertainty, results in less innovation in the big-data realm. To demonstrate the harm caused by reticence risk, Abrams cited a recent case involving mobile phone data in West Africa. Researchers believed that by tracking individuals’ phone data they could trace, and perhaps stymie, the spread of the deadly Ebola virus. Yet that work was never done, because virtually no phone company in the region could find an internal process to approve the disclosure of such data.

To help combat reticence risk, Abrams argued that EBDAs can demonstrate fair use to both regulators and consumers, thereby making organizations more confident in, and accountable for, their use of data.

When should an EBDA take place? For Abrams, EBDAs essentially divide big data projects into two phases: discovery and application. Under the former, project scoping and discovery processing should be assessed, while the latter covers insights used to predict consumer behavior and a review for effectiveness.

In assessing big data, Abrams argues, it is paramount that ethics form the basis of decisions that balance the interests of the organization against fairness to the data subject. He promotes what he calls the essential elements of accountability, which help form the basis of a big data code of ethical practice. The essential elements include a corporate commitment to codes of conduct that connect to outside criteria, such as data protection and privacy laws. Organizations also need a mechanism to apply that commitment, with internal monitoring to assure its implementation. Along with transparency, organizations must then be able to demonstrate both the commitment and its implementation.

“When we apply this framework,” said Acxiom’s Colclasure, “all the edgy things your brands are dreaming up, for example, all the preparation becomes critical for demonstrating ethical use,” adding, “The fairness of this data use can often depend on how well you prepared by using this framework.”

Expect more on this in the near future. Abrams said the IAF is set to release a paper next month, complete with insight from regulators, that will propose an internal code of conduct for organizations engaged in the ethical use of big data.
