
Establishing governance for AI systems


""

While various countries debate how to regulate artificial intelligence, few have implemented concrete plans. As the race to develop AI tools intensifies, organizations have recognized the urgency of creating a framework for the direction, control and monitoring of AI tools, one that manages risks without stifling innovation.

This is where challenges arise. How can organizations build AI governance structures that mitigate risks, foster innovation and allow for easy updates to accommodate future regulations? While there is no one-size-fits-all solution, the following foundational steps could be useful for various organizations.

Step 1: Identify the focus of the governance program

The design of an AI governance program focused on the development of new systems is different from one aimed at evaluating the implementation and use of third-party systems.

Those involved in developing new AI systems must focus governance on designing solutions and implementing measures that ensure the tools are secure. This includes building in privacy by design, setting up robust ethical guidelines and mitigating discriminatory bias.

Implementing these measures alone will not be enough. Each one must be documented, both for accountability and to provide the necessary public transparency about how the new system functions, without revealing trade secrets.

On the other hand, those using AI tools and systems developed by third parties should concentrate governance on establishing clear guidelines for acceptable use and limitations, as well as policies for evaluating whether those service providers have taken steps to avoid the risks mentioned above. It is also essential to train and build awareness among those who will handle these tools, and to monitor continually for compliance with the established rules.

Depending on the functionality involved, as with generative AI, those using AI tools should consider incorporating a human review process to assess the generated outputs. This step is crucial to ensure the results align with the company's values.
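
One way to make that review step explicit is to hold generated outputs in a queue until a named reviewer records a decision. The sketch below, in Python, is a minimal assumed design, not a prescribed implementation; the type and function names are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ReviewItem:
        """A generated output held for human review before release."""
        prompt: str
        output: str
        status: str = "pending"            # pending -> approved | rejected
        reviewer: str | None = None
        reviewed_at: datetime | None = None

    def submit_for_review(queue: list[ReviewItem], prompt: str, output: str) -> ReviewItem:
        # Hold the generated result; publishing should require status == "approved".
        item = ReviewItem(prompt=prompt, output=output)
        queue.append(item)
        return item

    def record_decision(item: ReviewItem, reviewer: str, approved: bool) -> None:
        # Capture who decided and when, for the accountability record.
        item.status = "approved" if approved else "rejected"
        item.reviewer = reviewer
        item.reviewed_at = datetime.now(timezone.utc)

Recording the reviewer and timestamp on every decision also feeds the documentation and accountability goals described in this step.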

Step 2: Map the AI systems in use

Just as building a record of all processing activities in a privacy program is essential, the same can be said of mapping AI systems for an AI governance program. Mapping the tools in use — or those intended for use — is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilization. Consequently, it enables the establishment of governance that aligns with organizational goals.

However, unlike data processing activities, responsibility for AI systems tends to be more centralized, which makes the systems easier to identify. When conducting this mapping, essential information to capture includes the following (a minimal record structure is sketched after the list):

  • System type: Internally developed, third-party or service with AI embedded.
  • System name.
  • Owners and stakeholders: Internal owner for third-party and AI-as-a-service systems.
  • Users.
  • Data sources: Where the data comes from and its type (structured, unstructured).
  • Dependencies: Other software and hardware dependencies.
  • Access points: Application programming interfaces, user interfaces and other integration points.
  • Functional description: What the system is designed to accomplish.
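
A lightweight way to keep these entries consistent is a shared record type that every mapped system must fill in. The Python sketch below is illustrative only; the class name, field names and example values are assumptions, not a standard schema.

    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        """One entry in the organization's AI system inventory."""
        system_type: str               # "internal", "third_party" or "ai_as_a_service"
        name: str
        owner: str                     # internal owner, even for third-party systems
        stakeholders: list[str]
        users: list[str]
        data_sources: list[str]        # origin and type, e.g. "CRM database (structured)"
        dependencies: list[str]        # other software and hardware the system relies on
        access_points: list[str]       # APIs, user interfaces, other integration points
        functional_description: str    # what the system is designed to accomplish

    # Example entry for a hypothetical third-party support chatbot.
    record = AISystemRecord(
        system_type="third_party",
        name="SupportChat",
        owner="Customer Service Lead",
        stakeholders=["Legal", "IT Security"],
        users=["support agents"],
        data_sources=["ticketing system (structured)", "chat logs (unstructured)"],
        dependencies=["vendor API", "SSO gateway"],
        access_points=["web widget", "REST API"],
        functional_description="Drafts first-line answers to customer queries.",
    )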

Step 3: Define or design the governance framework

Before choosing one or more AI governance frameworks to guide the structure of a program, it is necessary to identify the governance program's stakeholders. This step is also recommended by a project from the Center for Security and Emerging Technology at Georgetown University, which provides guidelines on selecting an AI governance framework.

These stakeholders fall into two distinct groups. The first is responsible for development or implementation and consists of professionals with the technical skills to understand AI. The second is responsible for governance and is formed of senior leadership and professionals capable of evaluating and managing the impacts and risks associated with the adoption of AI.

After identifying the stakeholders and composing the governance structure, the organization needs to clarify the scope of AI's use in its operations. This means cataloguing every application in use or under development, with a description of its functionality and lifecycle, so that each solution can be diagnosed; hence the importance of the prior mapping.

Answering these preliminary questions is important because of how AI governance frameworks are constructed. The majority focus on either the first or the second stakeholder category, so it may be necessary to combine more than one model. In terms of scope, some frameworks cover only specific parts of the lifecycle, such as development, training or implementation, or specific features, like explainability.
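
Whether a combination of frameworks covers the full lifecycle can be checked by treating coverage as a set problem. In the Python sketch below, the stage names and framework names are placeholders, not assessments of real frameworks.

    # Lifecycle stages the program must cover; names are illustrative.
    LIFECYCLE_STAGES = {"development", "training", "implementation", "monitoring"}

    # Hypothetical coverage map for two candidate frameworks.
    FRAMEWORK_COVERAGE = {
        "framework_a": {"development", "training"},
        "framework_b": {"implementation", "monitoring"},
    }

    def uncovered_stages(selected: list[str]) -> set[str]:
        """Return the lifecycle stages no selected framework addresses."""
        covered: set[str] = set()
        for name in selected:
            covered |= FRAMEWORK_COVERAGE[name]
        return LIFECYCLE_STAGES - covered

    # Choosing only framework_a leaves implementation and monitoring
    # uncovered, signalling that a second model is needed.
    print(uncovered_stages(["framework_a"]))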

An ongoing commitment

By adopting these three steps, it is possible to lay the foundation for a robust AI governance program that is flexible enough to absorb future regulations, especially risk-oriented laws and those focused on accountability.

A successful AI governance program is an ongoing commitment that needs a dynamic approach. By initially defining the governance program's focus, conducting comprehensive mapping of AI systems and developing a customized framework, organizations can both mitigate associated risks and cultivate innovation. This process should be adaptive, factoring in advancements in AI technologies and emerging regulatory landscapes. This multidisciplinary approach ensures comprehensive oversight, covering both the functional and ethical implications of AI.

As AI increasingly infiltrates diverse sectors, the imperative for a rigorously defined governance program is clear: It serves as a cornerstone for the responsible and ethical use of AI.

