Among the many topics discussed at the IAPP Europe Data Protection Congress 2024 in Brussels is the complexity of using legitimate interests as a legal basis for processing personal data in the EU.

Ireland's Data Protection Commission fined LinkedIn 310 million euros in October based on three findings involving legal bases for processing personal data for behavioral analysis and targeted advertising of LinkedIn users. The case was originally brought to France's data protection authority by La Quadrature du Net in 2018.

In the three findings, the DPC said LinkedIn did not validly rely on consent to process third-party data of users; on contractual necessity to process first-party data; or on legitimate interests to process first-party or third-party data for analytics.

Speaking during a breakout session in Brussels, DPC Commissioner Dale Sunderland offered more specifics on the agency's findings for the legitimate interest portion of the enforcement action.

"We will be publishing this decision in the coming period, but at this point, this information actually hasn't been published, so I hope it will be of some use to you all," Sunderland said.

The DPC's findings on legitimate interest

More specifically, Sunderland said the data at issue included information members provided "directly to LinkedIn in their profiles, and inferences that LinkedIn drew from information members provided on their profiles, such as gender or age, their actions taken on LinkedIn and actions taken by similar members. LinkedIn then placed that membership into segments and interest categories based on that data." He added the segments and categories included personal information like age, location and skills.

The DPC analyzed the findings against the three-part test set out by the Court of Justice of the European Union for assessing legitimate interest as a legal basis, which was detailed in a decision last month involving the Royal Dutch Tennis Association and VoetbalTV. The CJEU's test requires that a legitimate interest actually be pursued, that processing the data be necessary for that interest, and that the interest not be overridden by the interests and fundamental rights and freedoms of the data subject.

According to Sunderland, LinkedIn satisfied the first two prongs of the test. "We found that the targeted advertising carried out by LinkedIn helped its customers target individuals with more relevant jobs and ads, which in turn generated an income," he said.

For the second prong, the DPC "found that the processing was necessary for the pursuit of those legitimate interests." And though the DPC considered whether LinkedIn could have used "less intrusive ways to pursue both its own interests and those of its members and third parties," Sunderland said, "we accepted that LinkedIn had demonstrated at the time of the inquiry there were no less restrictive means of achieving the interest in question that could equally effectively achieve the aims pursued."

It was the third prong of the test, however, that "LinkedIn failed, in that legitimate interests were overruled by the interests and fundamental rights" of the data subjects.

Sunderland said the DPC acknowledged that LinkedIn contributed to "a number of positive public interests, such as allowing employees and job seekers to advertise their skills ... providing opportunities for upskilling, matching job seekers and job opportunities and a number of others."

However, the agency "identified a range of negative impacts on individuals and this included the wide range of inferred categories of data and a particularly concerning possibility that in the professional context, an individual could be targeted or, more problematically, excluded from job advertisements based on inferred data that would be inappropriate to consider in a professional context, such as gender or age, and there are also a large number of segments and interest categories." He also noted potential inferences "could be incorrectly segmented."

Ultimately, for the third prong of the test, the DPC concluded that the processing "was not within the reasonable expectation of users in that particular context."

"I think the really important point here," Sunderland said, "is what we want to see is that for legitimate interest to be appropriate and correctly used, the assessment needs to be really robust, and it needs to really tease out the choices that have been made by organizations to ensure that the legitimate interest" of the organization do not outweigh the rights and freedoms of the individual.

Assessing legitimate interests in a complex digital environment

With added regulatory complexity and rapidly developing technology, assessing legal bases for processing personal data is an issue companies around the world face each day.

To help navigate this space, the Information Accountability Foundation recently published a legitimate interest assessment "for an AI World." The 74-page document, which was released earlier this month, includes a "draft model legitimate interest assessment" that was informed by "global businesses, regulators, civil society, academics, and nongovernmental organizations."

Touting the IAF's work, IAF Consultant and Jersey Data Protection Commissioner Elizabeth Denham said, "I want to call out the complexity for businesses right now. A lot of businesses I've talked to are absolutely exhausted, depending on the day, in thinking about the multitude of assessments that they're needing to do." She said the number of assessments, from privacy impact assessments to human rights impact assessments for AI, among others, drove some of the IAF's work on the project "to see if we could find a delta between all of these different types of assessments to make it smoother, easier and more transactional."

Mastercard Chief Privacy and Data Responsibility Officer Caroline Louveaux, CIPP/E, CIPM, said her company "has been leveraging the legitimate interest for a very long time out of necessity. If you think about it, we have decades of experience leveraging data and AI to enhance the security of our network."

As an example, Louveaux noted that Mastercard uses data and AI "to verify that you are who you say you are before you make a transaction, to detect normal from abnormal transactions and to basically protect all of you from fraud and massive cyber attacks."

Louveaux said Mastercard is "constantly innovating to enhance the security and safety of the network," which, in turn, has helped improve its speed and accuracy by up to 300%.

In order to do this, she said, Mastercard must rely on legitimate interest, noting that consent in this context is not an option. "It's not workable in practice and it's not desirable." Performance of an agreement "is not an option either because Mastercard does not have a direct relationship with consumers," but rather with banks.

Louveaux said that, over the years, the company has developed a robust legitimate interest assessment to meet the CJEU's three-pronged test, and she highlighted that companies face a growing set of risks, interests, rights and freedoms that must all be balanced.

"That becomes really complicated."

Complexity and the need to engage with regulators

Denham, Louveaux and IPG Interpublic Group Global Chief Digital Responsibility and Public Policy Officer Sheila Colclasure, all of whom joined the DPC's Sunderland for the breakout session, recognized the growing need for leveraging legitimate interest and for educating industry, lawmakers and regulators about its utility. Outside the EU, many jurisdictions rely more heavily on consent and do not have a legitimate interest provision. But digital complexity and innovation are moving beyond the traditional notice-and-consent paradigm.

Colclasure said it is key for businesses to engage with regulators. "We are all familiar with the speed and scale of change today, especially with AI, which is changing things even more quickly than I would have thought."

Innovation is happening, she said, "and personal data, in many instances, is needed for innovation," but it often moves quickly. "The collaboration with regulators to get their thinking where ours is, to have them look at what we see, at what we're innovating so that we can identify any gaps before they happen" is paramount.

But, she added, "I don't think any of us want to surprise a regulator."

With a smirk, the DPC's Sunderland replied, "Regulators like surprise limitation whenever possible."

Jedidiah Bracy is the editorial director for the IAPP.