
Privacy Perspectives | The case for appointing an 'AI custodian' for AI governance


With the growing importance of artificial intelligence governance and legislation in the EU and elsewhere, many practical questions will arise about how AI systems are implemented and managed day to day for compliance and trustworthiness.

Current frameworks, such as the U.S. National Institute of Standards and Technology's AI Risk Management Framework, and guidelines, such as those of the European Commission's High-Level Expert Group on AI, clearly describe certain characteristics of trustworthy AI systems.

The frameworks and guidelines use similar concepts but employ different terminology. The characteristics they call for underscore that AI systems should respect human rights; be ethical, robust, reliable, safe, secure, resilient, accountable and transparent; and avoid bias and ensure nondiscrimination.

These documents also, to some degree, provide indications regarding the implementation of AI systems. They are, however, always high-level, which is typical for these kinds of documents. One example is the NIST RMF provision that processes and procedures must be in place to determine the needed level of risk management activities based on the organization's risk tolerance.

Still, a general consensus prevails: AI governance and management, in terms of compliance and trustworthiness, should involve a broad set of internal and external actors and stakeholders with varying degrees of importance, impact and involvement in the different stages of AI creation and deployment. Stakeholders can include designers, developers, scientists, procurement, staff who use or work with the AI system, legal and compliance, management, third parties, end users, civil society organizations, and so on.

A strong case was recently made, for example by the IAPP, for privacy teams to lead management of AI governance and compliance.

While there are many good reasons to follow this point of view, it is important to remember that privacy programs involve more stakeholders than just the privacy team. More specifically, staff are appointed as process owners or leads, responsible for certain activities or processes involving personal data. This involves contacting privacy teams, following their guidance, managing the activities from a privacy perspective and engaging other relevant stakeholders.

Consequently, a similar, but not identical, approach will be needed to ensure effective governance of AI in terms of compliance and trustworthiness. From this angle it is important to highlight, as explained in the NIST framework and various other documents, that AI systems are inherently sociotechnical in nature. This means we cannot treat them like any other app or tool with an app owner. AI systems will be broader in scope and potential impact and, thus, a broader perspective is required.

Given the many specific requirements of AI system compliance and trustworthiness, it makes sense to appoint someone to a dedicated role: an AI custodian.

That same person can manage many systems, as long as they have a sufficient support team and the expertise to manage them in parallel. An AI custodian does not necessarily need to be a technical expert in AI. They should, however, understand the technology basics, as well as the details of the specific use case. At the same time, they need to know how the organization's AI compliance program works and be familiar with the latest developments in AI legislation and frameworks.

So, what are the main responsibilities of an AI custodian?

An AI custodian would be responsible for a wide set of day-to-day governance and compliance requirements, and for involving more specialized functions as needed. While much of this remains to be seen in practice, some specific responsibilities are easy to foresee.

First of all, AI custodians would be responsible for making sure each AI system is entered into an internal AI system inventory, with accompanying information on its source, usage and basic technical details.
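
By way of illustration, the sketch below shows one possible shape for such an inventory entry. It is a minimal Python example; the field names and the register helper are assumptions made for this sketch, not terms prescribed by the NIST RMF or any other framework.

```python
# A minimal sketch of an AI system inventory entry.
# Field names are illustrative assumptions, not drawn from any framework.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    system_id: str        # internal identifier
    name: str
    source: str           # e.g., "internal", "vendor", "open source"
    intended_use: str     # business purpose and deployment context
    model_type: str       # e.g., "LLM", "classifier", "recommender"
    owner: str            # accountable business owner
    custodian: str        # the appointed AI custodian
    deployed: bool = False
    notes: list[str] = field(default_factory=list)

inventory: dict[str, AISystemRecord] = {}

def register(record: AISystemRecord) -> None:
    """Add a system to the inventory, rejecting duplicate IDs."""
    if record.system_id in inventory:
        raise ValueError(f"{record.system_id} is already registered")
    inventory[record.system_id] = record
```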

Second, they would lead an internal assessment and support the implementation of an AI system's compliance from the moment participation is reasonably possible, given that some systems can be internally developed. Most, however, would be externally acquired, whether from vendors or as open source.

Assessments should include impact on human rights, human oversight, safety, privacy, transparency and nondiscrimination, as well as social and environmental well-being. Obviously, many actors familiar with the systems and the relevant functions, such as security or compliance, would contribute to an assessment. Such a multitude of stakeholders, though, is an even stronger reason why a specific person is needed to make sure the process is completed and managed in a timely and efficient manner.
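
Again purely as a sketch, the assessment dimensions listed above could be tracked as a simple checklist the custodian drives to completion across contributors. The status values and helper functions here are illustrative assumptions, not part of any published assessment methodology.

```python
# A hedged sketch of tracking a trustworthiness assessment across the
# dimensions named in the article; statuses are illustrative.
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    IN_REVIEW = "in review"
    COMPLETE = "complete"

DIMENSIONS = [
    "human rights impact",
    "human oversight",
    "safety",
    "privacy",
    "transparency",
    "nondiscrimination",
    "social and environmental well-being",
]

def new_assessment() -> dict[str, Status]:
    """Initialize one status entry per assessment dimension."""
    return {dim: Status.PENDING for dim in DIMENSIONS}

def open_items(assessment: dict[str, Status]) -> list[str]:
    """Dimensions the custodian still needs to chase to completion."""
    return [d for d, s in assessment.items() if s is not Status.COMPLETE]
```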

As assessments and implementation are only a starting point, and many activities need to be repeated, there are a number of specific steps and actions required for day-to-day AI governance, explained in detail in other publications. For this reason, maintaining compliance and managing risks should also rest, to some degree, with one person.

This person would still, from a wider perspective, be supported by an AI compliance team and by an ethics committee for the most important decisions. AI custodians should be responsible for compliance from a system's creation to its decommissioning, while some activities, such as rights to redress, might need to be managed even longer. Mapped to the NIST RMF, the AI custodian would need to be actively involved in all of its functions: governing, mapping, measuring and managing. Their most active role, however, would relate to managing. This involves determining whether the system is achieving its intended purposes and stated objectives, treating documented risks, asking for additional resources, monitoring risks and benefits, measuring for continual improvements, and communicating incidents and errors to relevant stakeholders.

It would be reasonable for AI custodians to assess risks and make risk-based decisions on a daily basis, but this should be limited to low and medium risks, depending on the organization. High-risk decisions would need to be escalated to a higher level in the organization, with active participation of the AI compliance team, and the most impactful topics should be discussed with the relevant ethics committee.
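
The escalation logic described here is, at bottom, a routing rule. A minimal sketch follows, assuming an organization defines its own risk tiers and responsible parties; the tier names and handlers are hypothetical.

```python
# A minimal sketch of risk-tier routing as described above; the tiers
# and handler names are assumptions an organization would set itself.
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4  # most impactful topics

def route_decision(tier: RiskTier) -> str:
    """Return the party responsible for deciding at this risk tier."""
    if tier in (RiskTier.LOW, RiskTier.MEDIUM):
        return "AI custodian"        # day-to-day, risk-based decisions
    if tier is RiskTier.HIGH:
        return "AI compliance team"  # escalated, with custodian input
    return "ethics committee"        # most impactful topics
```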

Establishing AI custodians, whether as a full-time or part-time function, is, for the moment, only one of many possible ways to tackle the difficulties of AI governance. It seems plausible, however, that it will be seriously considered as a way to augment and support the teams responsible for AI compliance, which, for many organizations, might be their existing privacy teams.

