TOTAL: {[ getCartTotalCost() | currencyFilter ]} Update cart for total shopping_basket Checkout

| Regulators' rulebook for AI: Bit by bit Related reading: Key Terms for AI Governance

rss_feed

""

With countries around the world working to develop artificial intelligence regulations, organizations will soon face a wave of new AI governance rules.

But, what do regulators expect today?

Most countries do not yet have omnibus or dedicated AI laws on the books. China is a massive exception, with its Interim Measures for the Management of Generative Artificial Intelligence Services entering into force 15 Aug. But even they are billed as interim.

Canada's AI and Data Act, part of Bill C-27, is progressing, but it leaves most practical details to regulations yet to be developed and implemented over a two-year time frame, meaning a full picture of the law's requirements is still a ways off.

The EU AI Act spells out far more details in its text and its ongoing trilogue negotiations could wrap up by the end of the year. Still, that rulebook is likely to include a two-year implementation period for most company obligations.

In the U.S., we have the National Institute of Standards and Technology AI Risk Management Framework, a voluntary framework intended to guide organizations in the trustworthy design, development, use and evaluation of AI products, as well as the White House Blueprint for an AI Bill of Rights, which offers a glimpse into the Biden administration's aims and expectations, but not hard requirements that regulators can enforce.

So, where does that leave us in the interim?

The U.S. Federal Trade Commission answered that question, as have a growing number of regulators across the globe. In keynote remarks at the IAPP Global Privacy Summit 2023 in April, FTC Commissioner Alvaro Bedoya was clear. "First, generative AI is regulated," he said. "Second, much of that law is focused on impacts to regular people. Not experts, regular people. Third, some of that law demands explanations. 'Unpredictability' is rarely a defense. And fourth, looking ahead, regulators and society at large will need companies to do much more to be transparent and accountable." FTC Chair Lina Khan reiterated his message in her New York Times op-ed shortly thereafter.

As regulators around the world launch investigations into AI systems and publish guidance for organizations, drawing on existing laws and relevant fields, their questions, statements and actions offer us insight into regulators' expectations. While much of their thinking has yet to be tested in court, these investigations offer a sense of where regulators are focused and the questions organizations should ask themselves.

By the time hard laws are enacted, organizations will have poured hundreds of billions of dollars into AI systems development and integration. It is well worth building those systems and investing in AI governance structures with regulators' questions in mind.

In some cases, acceptable or even feasible answers are not yet clear. Technologies and governance structures are evolving quickly. Knowing what questions are being asked and developing an AI governance program that allows your organization to respond is a critical step. Rebuilding or dialing back the clock on algorithms when faced with enforcement actions will be far more costly.

But what are regulators asking today? What questions should organizations be prepared to answer?

Risk assessment and mitigation

Ultimately, regulators' questions focus on the risks posed by AI across a range of domains and the steps companies have implemented to address them. Questions about risk assessments are therefore a significant feature. These may include:

  • Have you conducted an AI impact assessment?
  • What policies, procedures and people do you have in place to assess AI risk and safety?
  • Who is involved in assessing AI risks and whether they have been sufficiently mitigated for product release? What are those individuals' roles, reporting structures, titles, departments and relevant expertise?

The FTC raised extensive questions in this regard, and many other regards, about staffing. This may reflect the long-recognized adage that people are policy and process — without professionals to implement them, policies are meaningless.

  • What risks did you take into consideration?
  • What risk mitigation measures did you implement?
  • What methods did you use to train or retrain your models?

. For AI developers specifically, retraining models is often core to risk mitigation. Methods of training, oversight of training and training of trainers also featured in regulators' questions.

  • In what instances was your AI model or product released after the risk assessment?
  • In what instances was the AI model or product not released after the risk assessment?

Regulators are also asking for an evidentiary basis for responses to the following and all other questions.

  • What policies and procedures do you have in place regarding testing and monitoring AI models or products throughout the development lifecycle?
  • What documentation can you provide regarding the results of testing and monitoring for risks identified and resulting mitigations?

Transparency and explainability

EU, U.K. and U.S. regulators have demanded transparency regarding data processing by AI systems and accessible explanations regarding how the systems work. The U.K. Information Commissioner's Office asked "How will you ensure transparency." As other regulators' communications have demonstrated, that question contains many subquestions, including:

  • Where is information about your data processing activities, including the logic of your processing? Is it publicly and easily accessible?
  • How is information on your data processing communicated to individuals?
  • Where is information concerning data subject rights communicated publicly?

Italy's data protection authority, the Garante, went even further, calling for an awareness-raising campaign "through radio, TV, newspapers and the Internet" to "inform individuals on use of their personal data for training algorithms." The FTC went further too, asking about testing and documentation to substantiate assertions and suggesting organizations should consider building this in from the start. 

  • Have you conducted or relied on testing or research assessing individuals understanding of your AI systems' methods, accuracy, data retention and use, and the disclosures you make about them?

Sourcing data

Regulators and policymakers alike have focused attention the data sources used to train AI models, considering a range of factors, including accuracy, representative nature, biases and legality.

  • What data sources are you using to train your AI model?

Whether or not this question is paired with additional subquestions, it does have some baked in:

  • How or from whom did you obtain the data?
  • To what extent are you relying on scraping publicly available data from the Internet to train your model(s) and from which websites?
  • On what legal basis are you collecting, using and retaining the data from the sources you have identified?
  • To what extent do you assess the content of your source data by manual or automated means?
  • Is your source data representative, unbiased and appropriately scoped for the intended uses?

In advisory guidelines on the use of personal data in AI, Singapore's Personal Data Protection Commission recommends addressing the question above in disclosures, to improve understanding of the quality of the training data set and steps taken to improve model accuracy and performance. The disclosures should also answer the following:

  • Does your source data contain copyrighted information?
  • Does your source data contain personal information?

The sections below cover related questions about the risks that answers may raise related to privacy, bias, intellectual property protection and other concerns.

Data protection

Data protection regulators around the world have been some of the first to launch investigations of AI-based products and services, drawing on their experience and the more-established privacy rulebook. European regulators and their U.S. counterparts were quick to point out that existing data protection laws apply here too. Those processing personal data to develop models or use AI products could face many privacy-related questions, including the following.

  • What types of personal data is your organization collecting, using, retaining or transferring?
  • For what purposes are you using each type of personal information?
  • How long is each type of personal data retained?
  • What is your legal basis for processing personal data, whether received from third parties, data scraping or from individuals' use of your AI-based product?
  • Have you conducted a data protection impact assessment?
  • Who within your organization can access personal data generated from individuals' use of your AI-based products?
  • Are you processing high risk or sensitive data, including biometric data or children's data?

The Garante demanded age-gating to prevent individuals under 13 from using the service, as well as individuals aged 13-18 when parental consent is not available. Others have addressed the issue more generally. This answers the following question:

  • What processes do you have in place to remove, filter, anonymize, obscure or otherwise prevent the inclusion of personal data in your training data?

One challenging area is covered by the following question, which remains largely unresolved in a field where larger and larger data sets are viewed as the key to unlocking the future potential of AI. The question above, however, might point to one piece of the puzzle.

  • How are you limiting processing to what is necessary, adequate and relevant for the processing purposes?

Regulators have focused extensive attention on large language models, raising many questions about accurate and inaccurate outputs about specific individuals. These questions, like others, may have more relevance to some types of AI products and less to others.

  • When are outputs related to specific individuals allowed or disallowed?
  • If an individual is considered a "public figure," does that influence whether AI-generated outputs on that individual are allowed?
  • What policies, processes (automated or manual) and people do you have in place to assess, test and monitor for outputs (accurate and false) related to specific individuals?
  • What mitigation measures have you implemented to reduce the risk of accurate and false outputs related to specific individuals?

Security

As expected, regulators are not only concerned with risks that arise through intended uses, but also through unintended uses and attacks. 

  • How are you identifying and monitoring for security incidents, including leakage of personal data, model inversion, data poisoning, prompt injection and other attacks?
  • What measures have you taken and documented to mitigate security risks?
  • In what instances have you been the subject of attacks resulting in the loss or exposure of personal information?
  • With regard to identified attacks, how many individuals were affected and which types of information were placed at risk?

Individual requests

Regulators are asking how individuals can engage with companies building and deploying AI products. Many of their early questions focus on how individuals can act on their privacy rights.  

  • What policies and procedures do you have in place to respond to individual complaints concerning personal data processing or AI-based generation of accurate or inaccurate personal information?
  • How many complaints have you received regarding the AI-based generation of false, misleading or disparaging information about specific individuals?
  • Which individuals or roles are responsible for establishing, implementing and monitoring policies related to individual complaint handling?
  • What mechanisms do you offer individuals to opt out of the collection, retention, use, analysis or transfer of their personal information?
  • What tools do you provide to individuals to allow them to access, rectify or erase personal data used to train AI models or generated as outputs of AI-based products?

In its communications, the Garante recognized the technological challenges in this area and the steps taken "to reconcile technological advancements with respect for the rights of individuals," and expressed hopes for continued efforts.

Regulators are asking about individual requests and concerns that go beyond privacy as well, with questions like:

  • How many complaints have you received in response to specific risks and safety issues identified in system cards?

Automated decision-making, discrimination and civil rights

Laws related to automated decision-making, antidiscrimination and civil rights offer regulators a clearer rulebook to draw on.

The EU and U.K. General Data Protection Regulations provide, with some exceptions, that "the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." European regulators have established a track record of enforcement in this area, as documented by a 2022 Future of Privacy Forum report.

In the U.S., regulators are focused on algorithmic decision-making in specific contexts where civil rights laws demand explanations and prevent discrimination. These include the Consumer Financial Protection Bureau, the Departments of Justice and of Housing and Urban Development, which jointly filed a statement of interest in a relevant court case, and the Equal Employment Opportunity Commission. The EEOC included AI-based discrimination in its 2023 strategic enforcement plan, and published technical assistance documents on its relevant authorities under the Civil Rights Act and the Americans with Disabilities Act. The California Privacy Protection Agency is also empowered and working to issue regulations concerning businesses' use of automated decision-making.

Regulators are leveraging these rulebooks to both ask and answer related questions.

  • Do you use AI-based products to make automated decisions with legal or similarly significant effects or which could result in adverse actions against specific individuals?
  • What information do you provide individuals regarding the logic and specific reasons for such automated decisions?
  • What options do you provide individuals to consent to or opt out of purely automated decisions?
  • Do you use AI-based tools in the employment context with current employees or prospective hires?
  • Does your use of AI-based tools in the employment context provide reasonable accommodations for individuals with disabilities to ensure they are rated or screened accurately and fairly?
  • Does your use of AI-based products have any adverse impact on individuals of a particular race, color, religion, sex, national origin or another protected characteristic?

Third party management and liability

It is important to recall that regulators will not only scrutinize AI developers, but also those integrating, deploying, plugging into or modifying AI systems developed by others. To date, we have seen a larger, though not exclusive, focus on personal data risks in questions related to third-party management. These include:

  • Are you a controller, joint controller or processor of personal data used in and resulting from the use of AI-based products?
  • What policies and procedures do you have in place to assess and mitigate risks of access or exposure of personal information resulting from application programming interface integrations, plugins or other means of using your AI-based products?
  • Which roles or offices are responsible for assessing and managing the risks associated with third-party use of or integration with your AI-based products?
  • What policies do you have in place related to testing, verifying, auditing or monitoring third-party use of or integration with your AI-based products?
  • What documentation, contracts or organizational or technical measures do you require from third parties related to their use of or integration with your AI-based product?
  • To what extent are third parties allowed to or prohibited from fine tuning your AI models or products?

France's DPA, the Commission nationale de l'informatique et des libertés, will soon publish recommendations on the "the sharing of responsibilities between the entities which make up the databases, those which draw up models from that data and those which use those models," among other topics.

Next steps

Answering these questions is only the starting point of the necessary risk and legal analyses and response, which could vary greatly by jurisdiction. For a fuller picture of AI governance considerations, the recently released IAPP AI Governance Professional Body of Knowledge is a helpful resource.


Approved
CDPO, CDPO/BR, CDPO/FR, CIPM, CIPP/A, CIPP/C, CIPP/E, CIPP/G, CIPP/US, CIPT, LGPD
Credits: 1

Submit for CPEs

4 Comments

If you want to comment on this post, you need to login.

  • comment Sophie Romaniello • Aug 3, 2023
    Thanks Caitlin for this brilliant overview! Most helpful.
  • comment Teresa Schoch • Aug 8, 2023
    Excellent!  Thank you for such a comprehensive overview.
  • comment CHUA Teck Leong • Aug 15, 2023
    Good update on the state of affairs about the various approach to regulate AI implementation.  We can safely conclude that no laws have been made to-date to meet the challenge.  
    
    On the reference to China, it must be clear that their system of passing is law differs from others.  The Interim Rules are founded on existing laws (see Article 1). 
    
    The next questions is are we trying too hard to do the impossible?  On one hand innovation should be self-regulated to achieve maximum effectiveness but the conduct of innovation must be maintained at a certain legal/moral standards.  
    
    Are the existing laws sufficient to draw in compliance? If so, why not look into this and tighten the screws (just like what China is doing).  This way, the controls can be tested and in time be enacted as laws. 
    
    If not, bearing in mind laws with criminal obligations are not often retrospective, what good if the law finally finds it's footing but the subject matter has move on to another state of affairs.  The challenge law faces is always a catch up game.  It's an observation that AI or the concept of AI started years ago and no serious effort was ever embarked to hold it down.  
    
    Last but not least, when the legislature can catch up with the evolution of AI is anybody's guess.
  • comment May Sethaphanich • Aug 25, 2023
    Privacy regulators should clarify their expectations with regards to how the right to be forgotten is to be fulfilled by AI developers in relation to personal data used to train models.