
Why this risk management best practice is not fit for digital innovation


""


Innovation requires a culture of openness and transparency, in which mistakes can be made, dilemmas can be raised and discussed, and joint decisions can be made about the design of new services and the risks that need to be taken.

Supervisory authorities around the globe typically consider the so-called “three-lines-of-defense model” best practice for risk management and internal control. This risk management model is based on a strict segregation of duties. The commercial departments are expected to innovate and to ensure compliance for new products and services (first line). The compliance function checks for irregularities (second line). The audit department then reviews post-rollout (third line).

This model, however, is not fit for purpose when it comes to digital innovation. In fact, this division of tasks inhibits innovation: because new technologies are not yet fully regulated, it is difficult to perform a clear-cut compliance check.

Artificial intelligence, in particular, opens up a whole new range of compliance issues, not least in relation to data processing. Large amounts of data are required to train AI, mostly from consumers and employees. EU privacy rules require that new technologies processing personal data be developed on the basis of privacy by design. A data protection impact assessment must also be carried out, taking into account the potential impact on individuals and society as a whole (ethics by design). Privacy by design and ethics by design are not about choosing between clear-cut options, in other words, picking between preexisting options A or B, but about developing option C to mitigate the negative impact of a new technology on the individual and society.
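To make this concrete, below is a minimal sketch of one common privacy-by-design building block: pseudonymizing identifiers and coarsening attributes before data is assembled for training. The field names, the keyed-hash approach and the salt handling are illustrative assumptions, not requirements drawn from the EU rules.

```python
# Minimal sketch of privacy by design in a data pipeline: replace direct
# identifiers with keyed pseudonyms and coarsen attributes before training.
# Field names and the SECRET_SALT handling are illustrative assumptions.
import hashlib
import hmac

SECRET_SALT = b"store-me-in-a-vault-and-rotate-me"  # hypothetical secret

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym)."""
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()

def prepare_training_record(raw: dict) -> dict:
    """Keep only what the model needs; never the raw identifier."""
    return {
        "user": pseudonymize(raw["user_id"]),  # linkable across records, not identifying
        "age_band": raw["age"] // 10 * 10,     # coarsen instead of storing the exact age
        "spend": raw["monthly_spend"],
    }

record = prepare_training_record(
    {"user_id": "alice@example.com", "age": 34, "monthly_spend": 120.0}
)
print(record)  # {'user': '<64-char hex digest>', 'age_band': 30, 'spend': 120.0}
```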


Here is an example of how option C can develop in practice. EU privacy rules require that deploying an algorithm should not lead to discriminatory outcomes. They also require companies applying algorithms for automated decision-making (for example, the automated rejection of a loan application) to provide individuals with meaningful information about the underlying logic and an explanation of the decision.
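What “meaningful information about the underlying logic” can look like in practice is sketched below, assuming a deliberately interpretable model (a scikit-learn logistic regression). The features, training data and model choice are invented for illustration, not a prescribed compliance solution.

```python
# Minimal sketch: explaining an automated loan decision by exposing each
# feature's contribution to the score. Features, data and the model choice
# are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income_k", "debt_ratio", "years_employed"]
X = np.array([[55, 0.42, 3], [82, 0.18, 9], [31, 0.65, 1], [60, 0.30, 5]])
y = np.array([1, 1, 0, 1])  # 1 = loan granted in (hypothetical) historical data

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(applicant: np.ndarray) -> dict:
    """Per-feature contribution to the decision score (coefficient * value)."""
    return dict(zip(FEATURES, (model.coef_[0] * applicant).round(3)))

applicant = np.array([40, 0.55, 2])
print("approval probability:", model.predict_proba([applicant])[0, 1].round(3))
print("contributions:", explain(applicant))
```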

At present, however, advanced forms of AI are still a black box: we do not know how algorithms arrive at their outputs. Innovation is, therefore, required to prevent discriminatory outcomes and to ensure transparency and explanation. In fact, innovation at major U.S. tech companies is currently geared toward cracking this black box and developing new de-biasing techniques. BBC News recently reported Google had tackled the black box problem with “explainable AI,” which is expected to be a major competitive advantage going forward. In this example, innovation is at the very heart of ensuring compliance.
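De-biasing starts with measuring bias. Below is a minimal sketch of one widely used check, the disparate-impact ratio of approval rates across groups; the group labels, decisions and the 0.8 (“four-fifths”) threshold are illustrative assumptions, not the technique any particular company uses.

```python
# Minimal sketch: a disparate-impact check on automated decisions.
# Groups, outcomes and the 0.8 ("four-fifths") threshold are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    rates = selection_rates(decisions)
    return {g: rate / rates[reference_group] for g, rate in rates.items()}

decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
ratios = disparate_impact(decisions, reference_group="A")
print(ratios)  # group B's ratio is about 0.33, below the 0.8 threshold
```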

As a “first line,” the commercial departments are responsible for developing option C.

However, in my experience, they are often not capable of doing so in practice. This leaves the compliance department with no option but to reject the innovation. In turn, boards of established companies (often encouraged by consultants) are led to believe, incorrectly, that the company must be prepared to color outside the lines to be able to innovate, when, in fact, the innovation needs to be directed at how to ensure compliance. They also often believe that far-reaching compliance requirements are a barrier for newcomers, while my observation is that compliance requirements are ultimately not a barrier but instead become the subject of innovation and disruption themselves. By now, for example, it has become clear that it is easier for the big tech companies to comply with the strict requirements of the EU General Data Protection Regulation, resulting in a competitive advantage.

In my view, responsible innovation is only possible if the relevant compliance experts are part of the innovation team and if teams take joint responsibility for compliance.

My conclusion is that the three-lines-of-defense model prescribed by supervisors is, in this form, not suitable for achieving responsible innovation. Years of controls by the compliance function have undermined the self-learning capacity of the business to make contextual assessments and factor in ethical considerations. Just like muscles, contextual assessments and ethical considerations wither away when not used.

In the words of psychologist Barry Schwartz in his TED Talk: "Moral skill is chipped away by an over-reliance on rules that deprives us of the opportunity to improvise and learn from our improvisations." In highly regulated sectors, such as finance, the reflex is to assume that if no rule prohibits something, it is therefore allowed and requires no moral consideration.

Because new digital services present us with ethical design choices and ethical dilemmas, the new reality cannot be captured in rules. But it is certainly not a moral-free area either. For example, because the commercial interest in collecting as much data as possible is so large, in practice every available trick is used to entice website visitors and app users to opt in (or to make it difficult for them to opt out). The design thereby exploits the predictably irrational behavior of people so that they make choices that are not in their best interest. A very simple example: consumers are more likely to click on a blue button than a gray one, even if the blue button is the less favorable option.


Many established companies currently also deliberately make it difficult for consumers to make an actual choice and seem to have little moral awareness of doing something wrong. Meanwhile, if one were to deliberately mislead someone in the offline world, everyone would immediately feel this is unacceptable behavior. Although many of these privacy practices are now under investigation around the world, they have normalized this behavior and obscured the view of what is or is not an ethical use of data. Although there are compliance rules, these mainly set a floor below which you cannot go. Compliance alone does not mean a company also takes social responsibility and creates long-term value.

The business itself, with the help of compliance experts, should develop these practical skills, challenge itself to develop the self-discipline to set the right moral course, and, at the same time, have the discipline to quickly recognize mistakes and stop them.

In established companies, failure is often seen as a weakness. By contrast, tech companies see it as a strength, provided it is quickly acknowledged (“fail fast”). The CEO of Amazon routinely calls his company the “best place in the world to fail.” (A personal note: celebrating failure in these tech companies does not mean that poor performance is accepted.) In these companies, failure is celebrated by the absolute best of the best, in a corporate culture where there is only room for excellence.

This is not something that can be copied easily, and it requires compliance officers to get out of their comfort zone. Often, the warning with AI is that it will require our workforce to “re-skill” and “up-skill.” For compliance officers, this means they will no longer check for irregularities, but rather collaborate with the business and think out of the box about new technical solutions to ensure compliance.

Current compliance departments are not equipped for this. Supervisory authorities will have to adapt, too.




6 Comments


  • Amalia Barthel • Jan 27, 2020
    An EXCELLENT article!!!
  • Daniel Wu • Jan 31, 2020
    Fantastic piece. I love it because you're raising critical questions about how lawyers can empathize with other goals in the company -- and improve their practice in new ways.
    
    Have you seen any specific case studies of companies that do this well - especially with the way they improve their "moral skill"? 
    
    Your points also remind me of how big financial services companies are very intentional about working with third-party vendors to help them navigate new regulatory demands better. These vendors are often called "legaltech," "regtech" or "fintech."
  • Steluta-Lavinia Puflea • Feb 4, 2020
    Perfect timing on this article. I find it very useful! Thank you!
  • Matheus Sturari • Feb 5, 2020
    Amazing content and provoking questions! Thank you for sharing.
  • Cheryl Rego • Feb 10, 2020
    This is such a brilliant article. I think most regulation has become a tick-box exercise. When was the last time something amazing came out of a highly regulated industry? However, most start-ups are not restrained in this way. I would also add to this that the risk heat map is somewhat out of date - privacy harms need to be contextualised.
  • Jurgen Otten • Aug 14, 2020
    I like this article and it perfectly explains why I got to work in the privacy space. It requires a lot of innovative and out-of-the-box problem-solving to create added value for the business. As privacy practitioners, we are confronted with interpreting laws that were largely written in an era when the technology we have to assess did not exist. I will certainly use this article to further promote and build my organization's privacy brand!