
How to manage privacy and AI risks within the same project


Dealing with privacy risk has long been considered a necessity in project and process management, much like dealing with technological risk. As artificial intelligence grows in importance and its risks become apparent, methodologies, frameworks and regulatory initiatives are emerging, first from the private sector and most recently from the public sector, to ensure ethical, societal and regulatory requirements are met when managing AI risks. This includes managing the privacy risks that result from the development, implementation and use of AI.

Recent developments in the EU are one example: the European Commission has proposed the first-ever legal framework on artificial intelligence, which, much as the EU General Data Protection Regulation did three years ago, sets the tone for other countries to follow, including emerging nations rapidly growing into major technology players.

In this regard, for instance, Brazil, one of the emerging AI markets, brought its first data protection law, the LGPD, into effect in 2020. It is already a pioneer of the South American regulatory framework and has also published a National Artificial Intelligence Strategy to serve as the country's foundation for ethics in AI.

Apart from governmental initiatives to regulate AI systems, most of which are still under development, the private sector, led by Big Tech companies such as IBM, Google and Microsoft, has already established its own ethical-AI frameworks in recent years. Because those companies are innovation hubs with major influence on how technologies are developed, it is easy to spot the similarities between the policies Big Tech advocates and the content of the aforementioned transnational AI regulations.

It is now clear that privacy and AI risk management will more and more often be part of the same projects and processes, and the relevant principles need to be observed from both security and ethical perspectives. Though overlapping, these risks differ considerably in consequences, scope, risk factors and potential areas of concern. At the same time, the risk management process needs to be effective, productive and reasonable to ensure it is actually followed and technology can grow for the benefit of all.

The practical considerations and advice that follow are based on simple, iterative steps backed by experience and a good deal of common sense.

Given the synergies between the parallel development of privacy and AI regulatory guidelines across the world and across industry sectors, the main principles companies should observe to create and deploy trustworthy, privacy-friendly AI solutions are, by general consensus and much like under the GDPR and the LGPD: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, nondiscrimination and fairness, social and environmental well-being, and accountability.

As a first step, it is important to embed those principles, which resemble those of the GDPR and of human rights declarations, in any project or initiative involving AI systems from the design stage. With the regulatory guidelines in mind, it is crucial to clearly understand what you aim to achieve and the impact it is meant to create. Then, thoroughly identify the main ethical, privacy, economic, societal, environmental and legal considerations based on the business area and the type of product, project or process.

Second, an already well-known process for assessing privacy risks, the data protection impact assessment questionnaire, can be slightly modified to include technical questions about how the AI system operates at the algorithmic level, how the machine learns, and how it gets from the input data set to the desired outcome. This is the perfect moment to assess whether your system may generate bias, undermine fundamental rights or threaten data subjects' rights from a privacy standpoint.
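As a rough sketch of what such an extended questionnaire could look like in practice, the structure below tracks AI-specific questions alongside standard DPIA items. The class names, questions and risk flag are hypothetical illustrations, not a prescribed format.

```python
# A minimal sketch of a DPIA questionnaire extended with AI-specific items.
# All questions and field names are illustrative, not a regulatory standard.
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    question: str
    answer: str = ""
    risk_flag: bool = False  # set True when the answer reveals a risk

@dataclass
class AIDPIA:
    standard_items: list = field(default_factory=lambda: [
        AssessmentItem("What personal data is processed, and on what legal basis?"),
        AssessmentItem("How long is the data retained, and who can access it?"),
    ])
    ai_items: list = field(default_factory=lambda: [
        AssessmentItem("How does the model get from the input data set to its output?"),
        AssessmentItem("What training data is used, and could it encode bias?"),
        AssessmentItem("Can individual decisions be explained to data subjects?"),
    ])

    def open_risks(self):
        """Return every flagged question so the teams know what to remediate."""
        return [i.question for i in self.standard_items + self.ai_items if i.risk_flag]
```

Keeping the AI questions in the same instrument as the privacy questions, rather than in a separate review, is what lets one assessment cover both risk families at once.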

Furthermore, the assessment phase must extend until you can test how the AI system, often called a "model," reacts when processing a real data set: whether it accomplishes the desired purposes and whether the delivered results are accurate, trustworthy, and free of bias and discrimination against a specific scenario or group of people.
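A minimal sketch of that testing step, under assumed data, might compute accuracy per demographic group on a held-out real data set and flag large gaps. The records, group labels and the 0.1 tolerance below are all illustrative assumptions.

```python
# A minimal sketch of checking a model's outputs for accuracy and group-level
# gaps on a held-out real data set. All values here are hypothetical.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += int(pred == truth)
    return {g: hits[g] / totals[g] for g in totals}

results = [  # hypothetical test predictions
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
]
scores = per_group_accuracy(results)
gap = max(scores.values()) - min(scores.values())
print(scores, f"accuracy gap: {gap:.2f}")
if gap > 0.1:  # assumed tolerance; a large gap suggests possible bias
    print("Flag for the privacy and data science teams before deployment.")
```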

This testing exercise can be performed by deploying the AI tool in a low-risk, controlled scenario in order to calibrate how the algorithm may react in more complex and higher-risk projects, guaranteeing in practical terms that all ethical AI principles are met, especially those related to transparency, fairness and nondiscrimination. Whenever the model delivers a result (an output) that violates any of the ethical AI principles, it is up to the privacy, security and data science teams to partner up and remediate the results, or even the way the AI tool is built and learns.
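For the low-risk pilot itself, one possible gate (an assumption of this sketch, not a rule from the article) is the four-fifths rule commonly used to screen for disparate impact: hold the rollout if any group's favorable-outcome rate falls below 80% of the highest group's rate.

```python
# A minimal sketch of a controlled, low-risk canary check before wider rollout.
# The four-fifths rule is a common fairness heuristic, assumed here for
# illustration; the pilot decisions below are hypothetical.
def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 model decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def passes_four_fifths(outcomes):
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()) >= 0.8

canary = {  # hypothetical decisions from a limited pilot deployment
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}
print(selection_rates(canary))
print("promote to production" if passes_four_fifths(canary)
      else "hold rollout and remediate with the data science team")
```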

As an illustrative example of AI remediation, a gender bias found during the assessment and testing phases could be tackled by changing the data set fed to the AI system to include more diverse training data, with information about women and nonbinary people, leading the machine to learn in the expected direction and improving the quality of the results.
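A minimal sketch of that rebalancing step, assuming a training set with hypothetical gender labels and counts, could oversample the underrepresented groups until each matches the largest one.

```python
# A minimal sketch of the rebalancing remediation described above: oversample
# underrepresented groups so the training set is more diverse. The labels and
# counts are hypothetical.
import random
from collections import Counter

random.seed(0)  # reproducible illustration

def rebalance(samples, key):
    """Oversample minority groups until each matches the largest group."""
    counts = Counter(key(s) for s in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, count in counts.items():
        pool = [s for s in samples if key(s) == group]
        balanced += random.choices(pool, k=target - count)
    return balanced

# Hypothetical, skewed training set: 700 / 250 / 50 examples per group.
training = (
    [{"gender": "man"}] * 700
    + [{"gender": "woman"}] * 250
    + [{"gender": "nonbinary"}] * 50
)
balanced = rebalance(training, key=lambda s: s["gender"])
print(Counter(s["gender"] for s in balanced))  # every group now has 700
```

Simple oversampling is only one option; collecting genuinely new data from the underrepresented groups, as the paragraph above suggests, generally improves quality more than duplicating existing records.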

Based on the steps above, you can also identify the main internal and external stakeholders, as well as the subject matter experts, you will need to involve. Depending on a rough initial business case and risk estimation, you might decide to go forward with internal resources or to seek external expert consultancy and implementation services, including governmental or NGO resources or open initiatives.

Finally, because AI systems can train and teach themselves to behave differently as they gather larger amounts of data under a lifelong learning approach, privacy officers need to constantly monitor the global regulatory landscape for guidance across the seas of innovation and cross-border challenges. At the same time, professionals eager to work in the AI compliance field must continuously assess substantial changes in the machines' learning curve and predictiveness in order to mitigate risks. Whether the private and public sectors will be able to develop their governance at the same pace at which these algorithms spread is still unknown.
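As a sketch of what that continuous assessment could look like, the monitor below compares a live quality metric against the baseline recorded at approval time and triggers a re-assessment when drift exceeds a tolerance. The baseline, tolerance and sample values are hypothetical.

```python
# A minimal sketch of ongoing monitoring for a continuously learning model:
# compare a live quality metric to a recorded baseline and alert on drift.
# Thresholds and metric values are assumptions for illustration.
BASELINE_ACCURACY = 0.91   # measured during the approved assessment phase
DRIFT_TOLERANCE = 0.05     # assumed limit before a re-assessment is triggered

def check_drift(current_accuracy):
    drift = BASELINE_ACCURACY - current_accuracy
    if drift > DRIFT_TOLERANCE:
        # In practice this would open a ticket for the privacy and
        # data science teams to re-run the impact assessment.
        return f"drift {drift:.2f} exceeds tolerance: re-assess the model"
    return f"drift {drift:.2f} within tolerance"

for weekly_accuracy in (0.90, 0.88, 0.84):  # simulated monitoring samples
    print(check_drift(weekly_accuracy))
```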


