Data lifecycle practices have appeared as standard items on third-party risk-management checklists for years, particularly with regard to privacy and data monetization. Privacy pros have been asking good questions and drafting provisions to cover data collection minimization, use limitations, and secure deletion. But have we thoroughly thought through the potential for third-party exploitation of the data, and are we doing enough to monitor and enforce these commitments after the contracts are signed?
The Guardian’s ground-shaking story on researcher Christopher Wylie’s work with Cambridge Analytica reminds us that data monetization is not the only concern we should have about data exploitation.
Though few companies have the trove of personal data that Facebook has, most companies do collect personal data and then share that data with third parties for various reasons. Data processing, analytics, and storage in the ordinary course of business often seem innocuous enough, and common contract terms limit the third party's use of the data in proportion to the nature of the data and the purpose of the use. But are we being too lax in determining proportionality, and thus missing significant risks?
Reportedly, Facebook thought it was allowing a CA partner to pull data for academic purposes only. Data access for academic research tends to be more open than access for commercial uses, so Facebook may have scaled its controls according to the purported use, and thereby let down its guard. According to Wylie, “Facebook could see [massive data collection] was happening. Their security protocols were triggered because [University of Cambridge professor Aleksandr Kogan]’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use. So they were like, ‘Fine’.”
It is not uncommon for a client or vendor to argue that certain contractual protections are not required because the purported use of the data is limited or noncommercial. Though the third-party misappropriation of data for "psychological operations" that occurred here may not be a universal concern, a third party insisting its data use is limited or innocuous is not a reason to let your guard down. Monitoring and audit rights may initially appear unnecessary, but if you do not secure the right to monitor and enforce compliance with use restrictions and other data lifecycle commitments, you will have no mechanism to ensure that the third party manages the data to your expectations.
Though it is true that many companies do not have the bargaining power, or regulatory lever, to secure robust audit rights via their third-party contracts, Facebook did have that power, and reportedly did secure the right to audit its contractor. However, according to a former privacy manager, Facebook never enforced its right.
Similarly, Facebook reportedly did not enforce compliance with secure deletion commitments. Although, as the Guardian reported, “[t]here were multiple copies of [the data]. It had been emailed in unencrypted files,” Facebook did not attempt to retrieve the data or verify secure deletion. “Facebook had asked the parties [in 2015] to certify they would not abuse data, but it did not take further action beyond that warning...” Secure deletion is a critical, but often over-generalized and under-enforced, contractual commitment. Any time a company shares its data, particularly personal data of its customers, the company should have firm policies and commitments addressing how and when the third party will return and/or securely delete the data, and what and when the third party must certify. When the data is no longer needed for the approved use, the company providing the data must enforce its secure deletion requirements.
Though the Facebook story is uncommon in its potentially profound political impact, and the facts are still unfolding, many of the underlying issues look like common issues in third-party risk management. Spotting the issues and knowing what rights to secure is an important first step, but we must not fail to monitor and enforce those rights. In some cases, the people in the position to enforce the rights are not informed that the rights exist. In others, we know we have the rights, but we either do not enforce or do not adequately enforce.
Given the global consequences, and the increasingly frequent cross-border transfer of data, third-party data management practices will likely attract even more regulatory scrutiny following the Facebook and CA allegations. As the Article 29 Working Party issues final guidance ahead of the rapidly approaching GDPR implementation deadline, the ePrivacy Regulation continues to be deliberated, and other countries draft and update their own privacy regulations, these grey-area data management practices will undoubtedly find their way into defined, though potentially unwieldy, regulations and guidelines, requiring companies to give still further attention to third-party data management practices.