
Ensuring AI governance: The time to act is now

On 30 Oct., President Joe Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signaling the future establishment of comprehensive standards to govern the use and advancement of AI across industries in the U.S. The executive order does not directly impose specific regulations. Rather, it mandates federal agencies, including the Departments of Commerce, Energy and Homeland Security, to formulate standards and guidance on AI that will later be drafted into law, creating lasting implications for businesses that use the technology.

Since the issuance of the executive order, many companies that use AI in their products and services have asked when they should start to worry about AI compliance. Should they start now? Or should they wait until the U.S. Congress passes a formal piece of AI-specific legislation?

The framing of such questions underscores a popular belief that efforts to regulate AI are a recent development that has yet to tangibly impact companies utilizing the technology. However, a closer examination of existing legislation reveals a different story: the use of AI has implications for compliance with policies that have been in effect since the big data boom a decade ago. Indeed, many industries, such as financial services, insurance, health care and medical devices, and digital advertising and marketing, are already subject to a unique set of laws that shape how companies in those sectors are allowed to use AI. In many cases, enterprises have incorporated such laws into their internal policies in an attempt to self-regulate and prevent compliance violations and the lawsuits that follow once those violations become public.

Financial services

The financial industry has integrated machine learning into crucial tasks such as fraud detection, loan prediction and anti-money laundering for more than a decade. Because these tasks carry substantial impact, the machine learning models behind them, predominantly simple linear models, have been subjected to stringent audits under both longstanding and recent regulatory measures.

For instance, the Fair Housing Act, enacted in 1968, explicitly prohibits biases in mortgage determination. Despite being in effect for over fifty years, this law directly pertains to the utilization of models for mortgage predictions, a common practice in most banks. Additional instances include regulations from the Securities and Exchange Commission and the Federal Trade Commission, which mandate advisory firms to establish robust risk management and governance frameworks, ensuring AI is employed in the best interest of investors and is devoid of bias.
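These laws describe outcomes rather than implementations, but a common first step in such audits is a disparate-impact comparison of approval rates across groups. The Python sketch below illustrates that idea; the column names, toy data and the "four-fifths" threshold are illustrative assumptions, not requirements drawn from the Fair Housing Act or from SEC and FTC guidance.

```python
# Minimal sketch of a disparate-impact check on loan-approval predictions.
# Column names, toy data and the 80% ("four-fifths") threshold are
# illustrative assumptions, not requirements taken from any statute.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> dict:
    """Return each group's approval rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return (rates / rates.max()).to_dict()

predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1],
})

ratios = adverse_impact_ratios(predictions, "group", "approved")
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)   # {'A': 1.0, 'B': 0.75}
print(flagged)  # group B falls below the four-fifths rule of thumb
```

A check like this is only a starting point; a full audit would also examine the data, features and downstream decisions the model feeds.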

Insurance 

Insurance providers are grappling with challenges similar to those faced by the banking industry, where biases in algorithms can result in serious legal repercussions. The insurance sector is subject to robust regulation through various laws, including the Unfair Trade Practices Model Act, Corporate Governance Annual Disclosure Model Act and Property and Casualty Model Rating Law.

These regulations, which were enacted long before the executive order, mandate, at a minimum, that decisions made by insurers are not inaccurate, arbitrary, capricious or unfairly discriminatory. AI clearly heightens the risk of producing inaccurate, arbitrary, capricious or unfairly discriminatory outcomes for consumers. Recognizing this, the National Association of Insurance Commissioners recently issued a Model Bulletin requiring all insurers to formulate, implement and maintain a written program, referred to as an AIS Program, for the responsible use of AI systems involved in or supporting decisions related to regulated insurance practices. The AIS Program is expected to be designed to mitigate the risk of adverse consumer outcomes.

Health care and medical devices

In recent years, the U.S. Food and Drug Administration has undertaken extensive efforts to formulate sound practices and regulations governing the utilization of AI in health care services and medical devices. In a recent regulatory framework proposal addressing AI-based software as a medical device, the FDA sought to establish a robust framework for evaluating and approving resubmissions for model upgrades. Traditionally, models in medical devices approved by the FDA were required to be "locked" upon approval. This proposed framework introduces a "model lifecycle regulatory approach," compelling manufacturers to establish a governance system capable of continuously monitoring the lifecycles of the models used in their AI/ML devices and managing associated risks. Each submission for a model upgrade must demonstrate reasonable assurance of safety and effectiveness.
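The FDA's proposal describes an outcome rather than an implementation, but the lifecycle idea can be pictured as a registry in which every model upgrade is recorded alongside the evaluation evidence submitted for it. The sketch below is a hypothetical illustration; the class names, metric fields and acceptance threshold are assumptions, not elements of the FDA framework.

```python
# Hypothetical sketch of a model-lifecycle registry for an AI/ML medical device.
# Field names and the acceptance threshold are assumptions, not FDA requirements.
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    version: str
    training_data_snapshot: str   # pointer to the exact data used for this upgrade
    metrics: dict                 # evaluation evidence, e.g. {"sensitivity": 0.94}
    approved: bool = False

@dataclass
class ModelLifecycle:
    device_name: str
    versions: list = field(default_factory=list)

    def submit_upgrade(self, candidate: ModelVersion, min_sensitivity: float = 0.90) -> bool:
        """Record the upgrade and approve it only if it meets a predefined bar."""
        candidate.approved = candidate.metrics.get("sensitivity", 0.0) >= min_sensitivity
        self.versions.append(candidate)
        return candidate.approved

lifecycle = ModelLifecycle("hypothetical-triage-device")
accepted = lifecycle.submit_upgrade(
    ModelVersion("2.1", "snapshot-2024-01", {"sensitivity": 0.94, "specificity": 0.91})
)
print(accepted)  # True; the upgrade record and its evidence remain in lifecycle.versions
```

The point of keeping every version, approved or not, is that the full upgrade history is itself the evidence a regulator can ask for.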

Digital advertising and marketing

The advertising industry operates at the forefront of user data, historically employing such data to train models for market analysis, talent and customer identification, and lead generation. Companies in this sector face extensive regulation under numerous consumer privacy laws, including the EU General Data Protection Regulation, California Consumer Privacy Act, Computer Fraud and Abuse Act, and various other federal and state laws. Although these laws have ramifications across sectors, their effects are notably more pronounced for digital advertising and marketing, primarily because of the sector's extensive reliance on data brokers. For instance, the California Delete Act stipulates the development of interfaces enabling users to explicitly opt out or erase their data from such companies. In response to user requests, these companies bear the responsibility not only to eliminate the user data from their own systems but also to explicitly request that data brokers who may possess the data remove it as well. Handling user data is a highly intricate task due to the substantial variations in consumer data retention policies across different countries and even different U.S. states. Training a model in a manner compliant with certain laws may inadvertently violate others.
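As a rough illustration of the deletion-propagation obligation described above, the following Python sketch shows one way a request might be handled: purge the record internally, then notify each known broker. The in-memory store, broker endpoint and transport callable are hypothetical placeholders; real broker APIs and the exact legal obligations vary by jurisdiction.

```python
# Hypothetical sketch of propagating a consumer deletion request to data brokers.
# The in-memory store, broker endpoint and transport callable are placeholders.
from typing import Callable, Iterable

def handle_deletion_request(user_id: str,
                            internal_store: dict,
                            broker_endpoints: Iterable[str],
                            send_request: Callable[[str, dict], str]) -> list:
    """Delete the user's records locally, then ask each known broker to do the same."""
    internal_store.pop(user_id, None)  # remove the data from our own systems first
    confirmations = []
    for endpoint in broker_endpoints:
        confirmations.append(send_request(endpoint, {"action": "delete", "user_id": user_id}))
    return confirmations

# Example usage with a fake transport standing in for each broker's API
store = {"user-42": {"email": "user@example.com"}}
receipts = handle_deletion_request(
    "user-42", store,
    ["https://broker-a.example/delete"],
    lambda url, payload: f"sent {payload['action']} request to {url}",
)
print(store, receipts)  # {} plus one confirmation per broker
```

Keeping the confirmations is the important part for compliance: the company needs a record showing the request was forwarded, not just that it was received.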

AI governance is becoming a prerequisite for enterprise sales

The necessity for AI governance, particularly in sectors like health care and financial services, existed well before discussions on regulating AI garnered significant attention with the widespread adoption of applications built on large language models, whose training requires acquiring substantial amounts of data. Thus, instead of viewing the executive order as a seminal moment for AI regulation in the U.S., it should be understood as a logical extension of preceding regulations that have strived to prevent corporate actors from violating users' rights to data privacy and from engaging in automated, algorithm-based decision-making that carries out harmful or discriminatory practices against consumers.

While overarching governmental regulations remain somewhat ambiguous and have yet to be translated into legislation containing a set of defined rules, specific industries have already received explicit directives to establish governance for enhanced risk management. In response to these directives, companies like Google, IBM, Airbnb and CVS have already instituted AI oversight councils that evaluate not just their internal AI-related risks but also the AI solutions they might purchase from third-party vendors.

The AI oversight councils at such companies require vendors to implement internal AI governance practices and showcase their capacity to consistently comprehend and address AI risks as prerequisites for considering their solutions. Consequently, vendors' lack of demonstrable AI governance has started to have an increasingly large impact on their sales revenue, which is why they have begun to implement safeguards throughout their model development process to enhance their brand's reputation, increase their product appeal, and ensure compliance with ever-evolving regulations. 

So, what does it take to achieve compliance with AI regulations?

While the regulations aim to prevent the malevolent use of AI, a closer examination of their wording reveals a more profound directive for AI companies: furnish concrete evidence demonstrating the absence of risk throughout the entire developmental lifecycle of an AI model. Put differently, regulations are keying in on "how a model came to be." For instance, Article 17 of the EU AI Act mandates that AI systems be thoroughly documented, encompassing all operations conducted to deploy them. Similarly, the Model Bulletin from the National Association of Insurance Commissioners asserts that the regulatory body can request information about the data used in developing a specific model or AI system, including details on the data source, provenance and quality.

These requirements dictate the use of a governance framework that can meticulously track all data contributing to a model and all the operations performed on that data. This effort extends beyond merely treating each model as a static entity and manually documenting it. Moreover, given the dynamic nature of machine learning, where the most current user data drives continuous improvement, it becomes crucial to acknowledge that models are constantly evolving. Treating each model as a static artifact overlooks the interconnections between various model iterations, fundamentally failing to address provenance questions related to the genesis of a model.
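One way to picture such a framework is as lineage metadata that links every model version to its parent version, the datasets it was trained on and the operations applied to that data. The Python sketch below is a simplified assumption of what those records might contain; production metadata stores are considerably richer.

```python
# Simplified, assumed sketch of lineage metadata linking a model version to its
# parent version, its training data and the operations applied to that data.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataOperation:
    name: str        # e.g. "deduplicate" or "join_with_consent_flags"
    inputs: list     # dataset identifiers consumed
    output: str      # dataset identifier produced

@dataclass
class ModelRecord:
    model_id: str
    parent_model_id: Optional[str]   # links iterations instead of treating each as static
    training_datasets: list
    operations: list = field(default_factory=list)

    def provenance(self) -> dict:
        """Answer the 'how did this model come to be' question for an auditor."""
        return {
            "model": self.model_id,
            "derived_from": self.parent_model_id,
            "data": self.training_datasets,
            "operations": [op.name for op in self.operations],
        }

v2 = ModelRecord(
    "pricing-model-v2", "pricing-model-v1",
    ["claims_2023_snapshot"],
    [DataOperation("deduplicate", ["claims_raw"], "claims_2023_snapshot")],
)
print(v2.provenance())
```

Because each record points to its parent, a chain of such records can answer provenance questions about any iteration, which is exactly what treating models as static artifacts fails to do.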

