
The Privacy Advisor | What privacy frameworks can teach us for implementing AI


Artificial intelligence is one of the most important technologies, if not the most important, that will shape our world in the years to come. But opportunities bring risks and challenges. Not surprisingly, numerous efforts are underway to create frameworks and standards that help reconcile the benefits with the potential problems we might face.

All these efforts involve multiple disciplines, and privacy professionals naturally have a big role to play in making sure privacy considerations are part of the process. But the history of implementing privacy also offers more universal lessons. This is all the more apparent as many look to privacy frameworks, principles, assessments and design approaches for inspiration when creating something similar for the AI domain.

At the same time, because the very nature of AI can amplify certain problems, we should carefully consider the lessons already learned to avoid repeating the same mistakes.

Why do we need to keep it simple?

Simplicity is what makes the key difference. High-level principles underpin almost all privacy laws, regulations, frameworks and standards, and the regulatory and cultural landscape is diverse. To actually implement privacy, clear do's and don'ts are needed.

Only at the next stage is it practical to consider local legal and cultural differences, tailoring the program and implementing local process variations. When implementing privacy, simple, incremental steps based on universal and intuitive rules, such as keeping personal data confidential and secure, pay off in efficiency, efficacy and cost optimization.

This also makes sense for internal rulemaking, training, and internal and external communication. Not surprisingly, the EU General Data Protection Regulation itself says the principle of transparency requires that any information and communication relating to the processing of personal data be easily accessible and easy to understand, and that clear and plain language be used. This means no convoluted, high-level legalese; clear and plain rules and concepts are needed to communicate with individuals. The same, some say, holds for communicating about and engineering AI.

While we are still struggling to spell out such do's and don'ts in the privacy domain, relying to an extent on common sense and organizational maturity when dealing with certain issues, we may not have the same comfort with AI, which probably requires very specific and logical rules with little or no exception. These should include preventing damage and harm to humans, animals and the natural environment, as well as triggering human involvement and human decision-making whenever such harm or damage seems unavoidable.

Conceptually, such rules seem easier to comprehend and implement than the overall notion of ethics or the interplay of legal requirements.
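As a rough illustration, a rule like "trigger human involvement whenever harm seems unavoidable" is concrete enough to be expressed in code. The following Python sketch is purely hypothetical (the Assessment fields and requires_human_decision are assumptions, not from any real framework); it only shows how such a rule can work with no exceptions and no ambiguity.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Hypothetical harm assessment for a single AI-driven action."""
    harm_to_humans: bool
    harm_to_animals: bool
    harm_to_environment: bool

def requires_human_decision(a: Assessment) -> bool:
    # The rule from the text: whenever harm or damage seems
    # unavoidable, escalate to human decision-making.
    return a.harm_to_humans or a.harm_to_animals or a.harm_to_environment

# Usage: gate an automated action behind the rule.
assessment = Assessment(harm_to_humans=False,
                        harm_to_animals=False,
                        harm_to_environment=True)
if requires_human_decision(assessment):
    print("Escalating to human review")
else:
    print("Proceeding automatically")
```

The deliberate absence of weighting or carve-outs is the point: a simple, universal rule is easy to communicate, train on and audit.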

How to implement a risk-based approach?

What we definitely learn from privacy programs is that it is not practical to put the same level of effort and caution into each and every processing activity. Doing so creates bottlenecks that prevent business and technology from growing, or produces a shallow approach with checklists and controls applied superficially. Or both.

Being able to identify low-risk activities and use cases, and subjecting them to little restriction, is crucial to spur innovation and to avoid being at a disadvantage compared with regions and jurisdictions that take a different approach. Risks may also be lower when the solution augments the decision-making process rather than replacing it entirely.

At the same time, for high-risk activities and use cases, such as in health care or public security, self-assessments might not be sufficient, and a defined regulatory process with public stakeholder participation is important.

Many situations will remain in between, so some type of self-assessment mechanism, similar to privacy impact assessments, with limited regulatory involvement and a reasonable amount of public scrutiny, would definitely help. Privacy engineering objectives, such as those defined by the U.S. National Institute of Standards and Technology, especially predictability and manageability (in addition to confidentiality, integrity and availability, which are essential for information security), could also be very relevant for an AI framework, with the obvious difference that they would need to refer to the workings of the entire solution and not only to how personal data is managed. A simple risk-tiering routine along these lines is sketched below.
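To make the tiering concrete, here is a minimal Python sketch. It is an assumption-laden illustration, not a proposed standard: the domain list, the augment-versus-replace flag and all names (RiskTier, classify_use_case) are hypothetical, loosely following the low/high/in-between split described above.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # little restriction, to spur innovation
    MEDIUM = "medium"  # self-assessment, similar to a privacy impact assessment
    HIGH = "high"      # defined regulatory process with public participation

# Hypothetical high-risk domains, taken from the examples in the text.
HIGH_RISK_DOMAINS = {"health care", "public security"}

def classify_use_case(domain: str, replaces_human_decision: bool) -> RiskTier:
    """Route an AI use case to a review process based on its risk tier."""
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if not replaces_human_decision:
        # Augmenting, rather than replacing, human decision-making lowers risk.
        return RiskTier.LOW
    # Everything in between gets a PIA-style self-assessment.
    return RiskTier.MEDIUM

print(classify_use_case("health care", replaces_human_decision=False))         # RiskTier.HIGH
print(classify_use_case("marketing analytics", replaces_human_decision=False)) # RiskTier.LOW
print(classify_use_case("hiring", replaces_human_decision=True))               # RiskTier.MEDIUM
```

In a real program, the resulting tier would determine which controls, documentation and approvals apply before the use case goes live.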

How to remain agile and open to change?

When implementing the GDPR in recent years, many have said it is a constant struggle: they were learning while moving forward with program implementation at the same time. Some experts say this will be even more true of AI-based solutions. This is why transparent and documented processes are needed.

At the same time, the requirements and level of effort must not be so burdensome that changes and modifications cannot be made as lessons are learned; locking everything in would create more risks and problems than skipping some formalities at the initial stage. Also, with the general notion that AI needs to be used in an ethical way, we need to be wary of how society evolves and ready to improve ethical standards in the future, mitigating potential new harms and risks without exceptional effort and without needing to reinvent the technology from scratch.

All in all, implementing privacy and creating AI are, to a large extent, two different topics. The challenges and issues may be completely different, and we have certainly been dealing with privacy issues much longer than with AI and its consequences. On the other hand, when creating principle-based frameworks that reconcile societal expectations and potential risks while spurring innovation and economic growth, privacy is the best source of inspiration. That also means we should use the lessons learned and avoid the same mistakes as we take steps on this exciting journey.

Photo by Owen Beard on Unsplash


