If you're contracting with an outsourced DPO, there are plenty of boxes to check during the hiring process. But once the hiring is complete, the DPO will have a list of their own to conquer. Here are some things to consider.

Once the necessary data protection officer skills are identified, the specific DPO is chosen, and the DPO services contract is agreed upon with the controller/processor, the DPO can begin to undertake the tasks specified under Article 39 of the GDPR. In performing the outsourced DPO role, many interesting questions arise concerning how various data protection scenarios will be evaluated under the GDPR. Given that no one has definitive answers yet, it is useful to examine all angles.

The following are four actual, anonymized issues that have arisen in the outsourced DPO role thus far.

Vulnerability testing

Any company with a website or app should have vulnerability testing performed on that website/app to enhance its security. The question is whether a firm providing vulnerability/penetration testing services is a processor/sub-processor requiring a GDPR-compliant agreement. When firms are hired to test web-facing applications and websites for weaknesses, they use a variety of techniques to search for existing and potential vulnerabilities. These include tests to identify weaknesses such as buffer overflows, SQL injection, security misconfigurations, insecure authentication and access controls, and cross-site scripting. Any of these, and other application and system configuration vulnerabilities, alone or in combination, could result in an unauthorized disclosure of personal data, triggering the duties under Articles 33 and 34.
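By way of illustration only, a tester’s probe might look something like the Python sketch below; the target URL, parameter name, and payloads are hypothetical, and the point to notice is that the raw HTTP responses such a probe captures could incidentally contain personal data.

```python
# Purely illustrative probe: the target URL, parameter name, and payloads
# are hypothetical. Real testing firms use far more sophisticated tooling.
import requests

# Crafted inputs that commonly reveal injection or scripting weaknesses.
PAYLOADS = [
    "' OR '1'='1",                # classic SQL injection probe
    "<script>alert(1)</script>",  # reflected cross-site scripting probe
]

def probe(url: str, param: str) -> list[str]:
    """Send crafted inputs and flag responses that hint at a vulnerability."""
    findings = []
    for payload in PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=10)
        # A reflected payload or a database error message suggests a weakness.
        # Note: the raw response captured here could incidentally contain
        # personal data, which is the crux of the processor question.
        if payload in resp.text or "SQL syntax" in resp.text:
            findings.append(f"Possible vulnerability via payload {payload!r}")
    return findings

if __name__ == "__main__":
    for finding in probe("https://example.com/search", "q"):
        print(finding)
```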

The vulnerability testing firm would then seem to be a processor/sub-processor operating under the instructions of the controller, despite the very limited likelihood that it will gain access to personal data. This would trigger its Article 28 processor duties and the need for a GDPR controller-processor agreement.

On one hand, the likelihood of the firm accessing personal data is very small — it is not their intention to process any personal data, just to identify vulnerabilities. On the other hand, the attempt to find vulnerabilities could always lead to the disclosure of personal data.

Is any entity that has a chance of processing personal data therefore at least a processor? Or does the likelihood need to exceed 50 percent? What is the role of intent, despite its absence from the definition of processing under Article 4(2)? Does the encryption of the personal data, or its type and quantity, impact the analysis? How about the types of vulnerability testing scans being done, or the level of coding standard (e.g., OWASP ASVS) or information-security standard (e.g., ISO 27001) achieved by the code/system being scanned? The takeaway? The easiest solution may be to treat any vulnerability testing firm as a processor requiring a processing agreement.

Photographic images 

Photographs are used all over the web, from social media to advertising, but may fall into a special category of personal data if they are used for authentication and identification and are processed by technical means. Exactly what type of “technical means” used for authentication and identification changes a photographic image from personal data to biometric data?

Under existing data protection law, photographic images appear to be considered a form of biometric data. For example, in the Willems case, the CJEU identified a facial image as biometric data, citing Reg. No. 2252/2004, which states that “biometric identifiers [facial images and fingerprints] should be integrated in the passport or travel document in order to establish a reliable link between the genuine holder and the document.”

However, under the GDPR, Recital 51 states that photographs should not automatically be considered biometric data unless they are “processed through a specific technical means.” Biometric data, a special category of data requiring extra protections, is defined in Article 4(14) as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.”

One illustration of specific technical means is the explanation given in the Working Party's WP192 document, which identifies the steps in facial recognition as acquisition of a digital image, face detection, feature extraction and comparison. Seeing this as a continuum, “technical means” could be considered any step after acquisition of a digital image. If the authentication and identification process using the digital image is performed by a person instead of purely technical means, does that mean the photo is not considered biometric data? If that person uses computer software to enlarge or sharpen an image, are those considered technical means, making the photo biometric data?
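To make the continuum concrete, the following Python sketch walks through those four WP192 steps; the choice of the open-source face_recognition library and the file names are assumptions for illustration, not a statement of how any particular controller operates. Everything after the first step is plausibly “specific technical processing.”

```python
# Sketch of the WP192 facial-recognition steps using the open-source
# face_recognition library; the library choice and file names are assumptions.
import face_recognition

# Step 1: acquisition of a digital image (arguably still just a photograph).
reference = face_recognition.load_image_file("passport_photo.jpg")
candidate = face_recognition.load_image_file("login_selfie.jpg")

# Step 2: face detection -- locate faces within the image.
reference_locations = face_recognition.face_locations(reference)

# Step 3: feature extraction -- derive a 128-dimensional encoding per face.
reference_encoding = face_recognition.face_encodings(reference, reference_locations)[0]
candidate_encoding = face_recognition.face_encodings(candidate)[0]

# Step 4: comparison -- allowing or confirming unique identification.
is_match = face_recognition.compare_faces([reference_encoding], candidate_encoding)[0]
print("Same natural person:", is_match)
```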

This topic needs further clarification from data protection authorities as to where the boundaries lie, and companies using digital images for identification and authentication should pay special attention to the type of processes they use.

Unified consent

Consent is required to load an app onto a mobile device and then to get information from it, but this can lead to many requests for consent. Unifying these requests allows the user to start using the app more efficiently. In practical terms, how is unified consent achieved for installing an app on a device, accessing information on the device, and collecting and processing personal data? Under the GDPR, consent (assuming consent is the legal basis for processing) is required to be unbundled and granular, meaning that consent must be obtained separately from other terms and conditions and for each category of data. Under ePrivacy Directive Article 5(3), consent must be obtained first to load an app on a device and then to access information (not just personal data) on the device.

Assume that a mobile device app will collect the name and health data of the individual. This means there are four different types of consent in play: to install the app; to access information on the device; to process the personal data (the user’s name); and to process the special category of personal data (the user’s health data). How should these distinct consent requests be presented — as one, two, three, or four requests — considering that the information that must be presented for each consent would be somewhat different? How would this change if the app gathers information from the mobile device itself, such as device or location information or the user’s contacts? Balance is important, so that the required informed consents are gathered with the least disturbance to the app user’s experience, perhaps through the use of layered privacy notices.
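One way to keep these consents unbundled and granular is to model each scope as a separate, timestamped record. The following Python sketch assumes hypothetical names, wording, and a simple console prompt purely for illustration.

```python
# A sketch of keeping the four consents unbundled and granular; the names,
# wording, and console prompt are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ConsentScope(Enum):
    INSTALL_APP = "install the app (ePrivacy Directive Art. 5(3))"
    DEVICE_ACCESS = "access information stored on the device (ePrivacy Directive Art. 5(3))"
    PERSONAL_DATA = "process your name (GDPR Art. 6(1)(a))"
    HEALTH_DATA = "process your health data (GDPR Art. 9(2)(a))"

@dataclass
class ConsentRecord:
    scope: ConsentScope
    granted: bool
    # Record when consent was given or refused, for accountability.
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def request_consents() -> list[ConsentRecord]:
    """Ask for each consent separately, so none is bundled with the others."""
    records = []
    for scope in ConsentScope:
        answer = input(f"Do you consent to {scope.value}? [y/n] ")
        records.append(ConsentRecord(scope, answer.strip().lower() == "y"))
    return records

if __name__ == "__main__":
    for record in request_consents():
        print(record)
```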

App developer

Many app developers will use the apps they develop for their own purposes and so be considered controllers under the GDPR. But when might an app developer not be a controller? The test is, of course, that the controller determines the purposes (why) and means (how) of processing. When an app developer collects personal data for their own use, including information provided by the device, they would be considered a controller. But what about when the app collects personal data that will be used by another firm that sets the rules for personal data collection and use? The second firm is clearly a controller, and the app developer seems like a processor acting under the instructions of the controller.

The Working Party's WP169 states that, “A processor could operate further to general guidance provided mainly on purposes and not going very deep in details with regard to means.” As the controller determines the purposes and means, the question then becomes: what are “means”? Referring to the history of the DPD, “means” is a shortened designation that “does not only refer to the technical ways of processing personal data, but also to the ‘how’ of processing, which includes questions like ‘which data shall be processed,’ ‘which third parties shall have access to this data,’ ‘when data shall be deleted,’ etc.”

Therefore, any processor determining these essential elements of the means would be classified as a controller, but not one merely determining the technical and organizational means of processing. Applying these definitions to the design and development of application software, would not designing code fall into the category of “means,” given that, at a minimum, the app developer determines which data is processed and possibly how long it is kept? This holds regardless of whether the data the app collects is processed only for others.

The Working Party's WP202 states that app developers “have the greatest control over the precise manner in which the processing is undertaken.” Or does the controller, in deciding which app to purchase and implement, thereby make the decision about which personal data to gather? Are app developers always at least joint controllers with the organizations that use their apps?

App developers need to understand the personal data they collect and how this impacts their role under the GDPR.
