Building effective AI through collaboration

The need for cross-departmental collaboration when deploying artificial intelligence models is not just advisable. It's essential. As head of data privacy and product compliance at Collibra, I am responsible for making sense of the emerging AI legal and regulatory landscape and interpreting its implications for our business.

But this is not work I can do alone.

I need input from a range of stakeholders to get a full picture of the proposed AI use case — its intended purpose, leveraged data, outputs and a host of other factors. This input is critical for making rational decisions about the use of AI. Furthermore, the decision as to whether an AI use case receives the green light is not one the legal team can make alone. 

Each stakeholder involved in AI governance necessarily relies on the input of the others to collectively decide an AI system's future. At my organization, we use collaboration as a key driver for AI governance. This fundamental concept may seem obvious, but what each stakeholder brings to the table and how the group of stakeholders collaborates is not. 

While we continue to tweak our processes, we have identified the types of issues each team needs to address and the information they must provide to other members of the larger governance team.

Creating the AI governance team

We have been leveraging AI functionality for some time now. However, interest in new AI use cases — for both internal and product purposes — exploded in the last year. We hosted AI hackathons, generated a substantial number of innovative product proposals with AI functionality and received countless requests to leverage new AI functionality within existing third-party tools.

Initially, the intake and approval processes, as well as the nature of stakeholder input, were unclear. AI is "hot" right now (for good reason) and the demand is relentless, but we had to place the influx of these requests on hold to fully understand each use case, create a set of criteria to evaluate the use cases, and identify the right stakeholders within our organization to evaluate use cases, ensure business value and mitigate risk.

While the right model will vary according to each organization's industry, size and structure, our model is informed by our history in data governance. Ensuring the right people, processes and procedures are put in place to assign responsibility and accountability for all data uses is the core of our operation. We lean heavily on the data perspective in our assessments and conversations about AI models.

The stakeholder groups leveraged within our AI governance ecosystem, their respective areas of focus, and their dependencies on other stakeholders for critical information in evaluating AI use cases are described below.

Legal, ethics, compliance

Our legal team, which also serves as the organization's compliance and ethics functions, is responsible for:

  • Confirming the legality of an AI model's use under a myriad of legal and regulatory frameworks across the system's global footprint.
  • Assessing the potential for AI systems to trespass on fundamental human rights or pose other serious ethical concerns.
  • Ensuring internal compliance controls are met in any AI deployment.

At the most basic level, we determine whether the AI use complies with proposed AI legislation, like the EU AI Act, or with existing legislation that captures AI use-case risks, such as the EU General Data Protection Regulation. From an intellectual property perspective, we address the risks of AI causing IP infringement, third-party license noncompliance and lapses in IP ownership.

On the data privacy side, we determine the legal bases for processing personal data within an AI system (both training data and inference data) and whether the processed personal data has been minimized to meet the use case's needs. We also decide how to track the flow of personal data contained in the training data, inference data and output data sets in our personal data maps and how to address data subject access requests involving personal data used within AI systems.
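To make that tracking concrete, the sketch below shows one way an entry in a personal data map might record which training, inference and output data sets an AI use case touches, and support a quick triage check for data subject access requests. It is a minimal illustration under our own assumptions; the field names and the processes_source helper are hypothetical and do not describe any particular tool.

```python
from dataclasses import dataclass, field


@dataclass
class AIDataMapEntry:
    """Hypothetical entry in a personal data map for one AI use case."""

    use_case: str                      # approved AI use case this entry belongs to
    data_subject_types: list[str]      # e.g. ["customers", "employees"]
    legal_basis: str                   # e.g. "consent", "legitimate interest"
    training_sources: list[str] = field(default_factory=list)
    inference_sources: list[str] = field(default_factory=list)
    output_destinations: list[str] = field(default_factory=list)
    retention_days: int | None = None  # agreed retention period, if any

    def processes_source(self, source: str) -> bool:
        """DSAR triage helper: does this use case process data from `source`?"""
        return source in self.training_sources or source in self.inference_sources


# Illustrative example only: a hypothetical support-ticket summarization use case.
entry = AIDataMapEntry(
    use_case="support ticket summarization",
    data_subject_types=["customers"],
    legal_basis="legitimate interest",
    training_sources=["crm_tickets_2023"],
    inference_sources=["crm_tickets_live"],
    output_destinations=["agent_dashboard"],
    retention_days=365,
)
print(entry.processes_source("crm_tickets_live"))  # True
```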

While proposed AI laws and regulations will do their best to address potential AI risks, we are well aware that ethical concerns related to AI will continue to arise faster than governments can address them via legislation. It is important to look at the unintended long-term effects of AI systems and whether these systems could trespass on fundamental rights in novel ways.

We must decide whether an AI system's use threatens the equal treatment of protected classes or the freedom of expression of individuals. Serious attention should be paid to the level of automation within an AI system and the ability of human oversight to mitigate potential harms. The AI system owner's accountability for the system's impact, and their ability to explain how the model operates, are critical to providing full transparency.

Compliance teams look to adhere to controls and internal policies that meet AI-specific compliance standards, such as the National Institute of Standards and Technology AI Risk Management Framework. This requires a careful review of the approval process for the AI system, as well as confirmation that the legal and ethics teams properly assessed the legal and ethical concerns before the use of an AI model was approved.

The majority of legal, ethics and compliance professionals are not data scientists. Typically, they do not have the technical ability to determine the type, quality, accuracy and source of an AI system's training and inference data, or the accuracy and explainability of the model. They must work closely with the analytics teams to gather this information. 

Similarly, the business stakeholders must inform these professionals of the purpose of the use case, the critical decisions that will be made based on the AI model's output, the sophistication of the output's audience, the potential for the model to be misused and the other factual parameters that feed into the legal, ethics and compliance teams' assessment of the risks.

Data office

The data office, often led by the organization's chief data officer, is responsible for ensuring data from across the organization is highly governed, democratized and transparent to all, and that it drives maximum value. The data office will want to know who needs access to the specific data, how long it will be needed and what the full life cycle of the AI model will look like.

The primary responsibility for this information, however, lies with other stakeholders within the organization. Legal and business stakeholders must agree on access and retention, and the analytics team is responsible for ensuring the data office is supplied with accurate and complete data.

Analytics

The analytics team is tasked with building AI models to meet the business stakeholders' requirements. They focus on the costs and complexities of building and maintaining models, the quality and volume of the training data, the applicability of such data to the use case, the key performance indicators and metrics that ensure investment in the model is paying off, and the accuracy and integrity of the model over its life cycle.

The team is also tasked with building proper observability mechanisms to ensure data, model or concept drift is detected and improvements are implemented. However, before the team can even begin to build the model, they must collaborate with the legal, ethics and compliance teams to ensure:

  • Data used to train and/or fine-tune the model can be leveraged both legally and ethically.
  • Storage, retention and access rules have been determined for the model and for its training, inference and output data, and the purpose for using the model has been approved.
  • Legal, ethics and compliance teams are comfortable that the observability mechanisms established sufficiently mitigate legal or ethical risks driven by inaccuracies.
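As a deliberately simplified illustration of the observability mechanisms mentioned above, the sketch below compares the distribution of a single feature at training time with recent production values using a population stability index. The PSI metric, the 0.2 threshold and all names are our own assumptions chosen for illustration, not a description of our production monitoring stack.

```python
import numpy as np


def population_stability_index(reference: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """PSI between training-time ("reference") and recent production values of one feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)       # bin edges taken from the reference sample
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    rec_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    ref_pct = np.clip(ref_pct, 1e-6, None)                     # avoid log(0) in sparse bins
    rec_pct = np.clip(rec_pct, 1e-6, None)
    return float(np.sum((rec_pct - ref_pct) * np.log(rec_pct / ref_pct)))


rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)   # stands in for training-time feature values
recent = rng.normal(0.4, 1.1, 5_000)      # stands in for recent inference-time values
psi = population_stability_index(reference, recent)

# A common rule of thumb (an assumption to tune per model): PSI above roughly 0.2 signals
# material drift and would send the use case back to the AI governance roundtable.
print(f"PSI = {psi:.3f}, drift flagged: {psi > 0.2}")
```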

Further, the security and compliance teams will require a review of the project to ensure proper security protocols and additional compliance controls are met when building the model.

Information security

A compromised AI model used in material business functions threatens not only the privacy and confidentiality of the data processed within the model, but also the integrity of the decisions made based on its output. As a consequence, the information security team will depend heavily on its collaboration with the analytics team to ensure proper security measures are built into the model, as well as contemplated in storage and access requirements. Legal and compliance should weigh in to ensure the imposed security measures meet regulatory standards, as well as internal and external compliance controls.

Business stakeholders

Business stakeholder collaboration with the analytics team to ensure the model reflects the business needs is, of course, crucial. However, the business stakeholder should also be primarily responsible for ushering the use case through the AI governance approval process, following through with all AI governance stakeholders to ensure the use case meets their requirements. They should actively work with each member of the AI governance team to address any questions, concerns or requirements that materialize during each stakeholder's review of the use case, and document those requirements for long-term compliance. Further, depending on the use case, a variety of different stakeholder groups could be at the table, each with different interests (e.g., same data, different use cases). All of these interests will need to be vetted by the members of the AI governance team.

Human resources

Many internal AI use cases involve employee data, or impact employees or recruiting processes. Whenever employee or recruiting data is leveraged, the human resources team will need to understand the impact on employee policies, rights, productivity, morale and well-being.

Starting with the use case

Addressing all these stakeholder interests requires a starting point. Our AI governance team starts with the use case.

To be considered, a use case must, at a minimum, define the nature of the AI model being leveraged, the data used to train, fine-tune and run the model, the nature of the output, and the purpose of the use case.

As a group, we organize our efforts at this first step in accordance with an agreed-upon use case intake process because it gives us a go/no-go decision on whether we'll even permit the team to access the required data and build the model. If the use case is authorized, the analytics team can begin working through the requirements each stakeholder has established for the use case. 
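The sketch below illustrates, under our own assumptions, what a minimal intake record for such a process might look like: it captures the minimum use case information described above plus per-stakeholder sign-offs that feed the go/no-go decision. The field names, the stakeholder group labels and the go_no_go helper are hypothetical and do not describe our actual intake form.

```python
from dataclasses import dataclass, field

# Hypothetical stakeholder groups mirroring those described in this article.
REQUIRED_SIGNOFFS = ("legal_ethics_compliance", "data_office", "analytics",
                     "information_security", "business", "human_resources")


@dataclass
class AIUseCaseIntake:
    """Hypothetical intake record for an AI governance use case review."""

    name: str
    model_type: str              # nature of the AI model, e.g. "hosted LLM", "in-house classifier"
    training_data: list[str]     # data sets used for training and fine-tuning
    inference_data: list[str]    # data supplied to the model at run time
    output_description: str      # nature of the output and its intended audience
    purpose: str                 # business purpose of the use case
    signoffs: dict[str, bool] = field(default_factory=dict)

    def go_no_go(self) -> bool:
        """Go only when every stakeholder group has signed off on its requirements."""
        return all(self.signoffs.get(group, False) for group in REQUIRED_SIGNOFFS)
```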

Once up and running, the team will continuously monitor and verify the model to ensure proper performance. This feedback loop, including returning to the AI governance roundtable to ensure the use case is still valid, is essential until the model is either taken out of production or replaced with a better model.

Collaboration equals AI success

In the tech sector, forging ahead rapidly without getting bogged down by endless internal feedback cycles is celebrated. While nobody wants to see red tape stifle innovation, there are a number of risks and uncertainties associated with the long-term impacts of AI. 

As such, the use of AI requires thoughtful consideration from different perspectives. A collaborative model in deploying AI systems sets an enterprise up for success in this new frontier by not only ensuring business value, but also mitigating risk. Ultimately, obtaining all stakeholder input in advance results in the deployment of better AI technology, facilitating innovation rather than slowing it down.

