
AI lang syne: A look back on 2023 and considerations for 2024


2023 marked a significant shift in artificial intelligence technology and ushered in a flood of laws and standards to help regulate it. Here's a look at the major AI events of 2023, what may come in 2024 and some practical tips for responding to the challenges and opportunities that lie ahead.

AI developments in 2023

In the U.S., the Federal Trade Commission put businesses on notice that existing laws, such as Section 5 of the FTC Act, the Fair Credit Reporting Act and the Equal Credit Opportunity Act, apply to AI systems. This past year, the FTC brought actions against Ring, Edmodo and Rite Aid for violative practices involving AI. Its latest action against Rite Aid resulted in an order with requirements such as fairness testing, validation of accuracy, continuous monitoring and employee training. Commissioner Alvaro Bedoya described the order's requirements as a "baseline" for reasonable algorithmic fairness practices. The FTC has also made clear through its actions this year that it will continue to use model deletion as a remedy.

On 30 Oct. 2023, U.S. President Joe Biden issued the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," recognizing the benefits of the government's use of AI while detailing core principles, objectives and requirements to mitigate risks. Building on the executive order, the Office of Management and Budget followed with its proposed memo "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence." The OMB memo outlines requirements for government agencies as they procure, develop and deploy AI. Together, the documents call on federal agencies to produce more specific guidance. While the executive order, the forthcoming final OMB memo and agency guidance apply to the federal government, companies providing services to the government will also be subject to these requirements.

The state and city AI policy landscape across the U.S. also continued to evolve, with a flurry of regional action on AI over the past year. Under state omnibus privacy laws, Colorado finalized rulemaking on profiling and automated decision-making, and California proposed rulemaking on automated decision-making technologies. Several other states passed similar laws providing opt-outs for certain automated decision-making and profiling, while other state and city laws targeted particular applications of AI, including child profiling, writing prescriptions, employment decisions and insurance.

Some states also spent 2023 establishing laws focused on government-deployed AI. For example, Illinois and Texas established task forces to study the use of AI in education and government systems, as well as the potential harms AI could pose to civil rights. Connecticut passed legislation establishing a working group on AI and setting requirements for government use. Additionally, in September 2023, Pennsylvania's governor issued an executive order establishing principles for government-deployed AI.

Beyond the U.S., the EU and other countries and international bodies have also moved to regulate AI systems.

On 8 Dec. 2023, the EU reached political agreement on the AI Act, its comprehensive framework for the regulation of AI. The act scales requirements based on the risk level of the underlying AI system. It bans practices that pose an "unacceptable risk," applies strict requirements to practices that are "high risk," requires enhanced notice and labeling for "limited risk" systems that use AI, and allows voluntary compliance, such as codes of conduct, for "minimal risk" systems. The act applies a separate tiered compliance framework to general-purpose AI models (including certain large generative AI models), with enhanced obligations for models that pose systemic risks. Once the text is finalized, it is expected to enter into force sometime this summer.

Canada also launched a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems in September 2023 and continues efforts to enact the Artificial Intelligence and Data Act. More AI regulation took effect in China in 2023, including rules on deep synthesis in internet information services and on the management of generative AI. Discussions on international standards also took place in 2023, with G7 leaders collaborating through the Hiroshima AI Process, which yielded principles and a code of conduct for organizations developing advanced AI systems.

Trends to expect in 2024

2024 will bring more adoption and novel uses of AI tools and systems by government, private entities and individuals. As a result, more legislation and regulatory scrutiny around the uses of AI is expected.

Around the globe, in addition to the EU AI Act taking effect, more countries will likely consider and pass AI laws. As happened with the GDPR, many will likely model their laws on the EU AI Act. Additionally, while Canada's AIDA regulations may be finalized in the coming year, AIDA's provisions would not come into effect for another two years.

In the U.S., more states will likely require data protection assessments for profiling and automated decision-making, including in the advertising context, and some pending bills propose opt-in consent for profiling. Several states are also considering bills on AI in employment contexts, including notice to employees and other restrictions on use in employment decisions and monitoring, requirements for bias and disparate-impact analysis, and rights of employees to request the information used in AI processing. Additionally, laws and enforcement activity will continue to focus on preventing discriminatory harms in the contexts of credit scoring, hiring, insurance, health care, targeted advertising and access to essential services, as well as on disproportionate impacts of AI on vulnerable persons, including children.

Practical tips for AI governance in 2024

With so much change coming, it can be hard to know where to focus your AI governance efforts. Consider the following practical tips as you head into 2024:

1. Develop and update AI processes, policies and frameworks

Have a process in place to keep up to date with changes in AI technologies, laws, use cases and risks. This will help keep your policies and frameworks current and compliant.

Create accountability by designating personnel responsible for your AI program and have a process to train personnel about AI policies and use of frameworks.

In developing policies and frameworks, consider the life cycle of your AI systems and tools, from the data used to train AI models in development, to data inputs and outputs processed in production. Policies and risk assessment frameworks should be updated to identify and address risks specific to AI systems. For example, policies and frameworks should address:

- securing AI systems and data
- incident response procedures
- data sourcing practices
- data minimization and retention
- assessing and monitoring systems for data integrity, bias, safety, and discriminatory or disparate impacts to individuals
- assessing the consequences, rate and likelihood of inaccurate outputs
- societal harms
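
Where such a framework is tracked internally, the risk areas above can be encoded as a simple, machine-checkable checklist. The Python sketch below is a hypothetical illustration; the area names and questions are examples drawn from the list above, not a standard taxonomy.

```python
# Hypothetical sketch: encoding the policy risk areas above as a
# reviewable checklist. Names and questions are illustrative only.
AI_POLICY_CHECKLIST: dict[str, str] = {
    "security": "Are AI systems and their data access-controlled and monitored?",
    "incident_response": "Is there an AI-specific incident response procedure?",
    "data_sourcing": "Is the provenance of training data documented?",
    "minimization_retention": "Are inputs and outputs retained only as long as needed?",
    "integrity_bias_safety": "Are integrity, bias and safety metrics monitored in production?",
    "disparate_impact": "Have discriminatory or disparate impacts on individuals been assessed?",
    "inaccurate_outputs": "Are the consequences, rate and likelihood of inaccurate outputs assessed?",
    "societal_harms": "Have broader societal harms been considered?",
}

def open_items(completed: set[str]) -> list[str]:
    """Return the checklist areas not yet reviewed for a given system."""
    return [area for area in AI_POLICY_CHECKLIST if area not in completed]
```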

Review external policies and statements about your AI systems and data practices to ensure they align with your policies and properly disclose and accurately reflect information learned through inventories and risk assessments.

2. Put policies into action – conduct AI inventories and risk assessments, monitor vendors

Conduct an inventory of existing AI systems. Identify and document the various AI systems in use, the content and data they process, the outputs they produce, and any downstream recipients of data or content. Once you have conducted an AI inventory, use this information to conduct an AI risk assessment, considering the particular risks described above.
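
If the inventory is maintained in code rather than a spreadsheet, a per-system record might look like the following minimal Python sketch; all field names and the sample entry are hypothetical.

```python
# Hypothetical sketch of an AI inventory record; adapt the fields
# to your organization's systems and data flows.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                          # e.g., "support-chatbot"
    purpose: str                       # intended use of the system
    data_processed: list[str]          # categories of input data
    outputs: list[str]                 # content or decisions produced
    downstream_recipients: list[str] = field(default_factory=list)
    third_party: bool = False          # vendor-supplied vs. in-house

inventory = [
    AISystemRecord(
        name="support-chatbot",
        purpose="answer customer questions",
        data_processed=["chat transcripts", "account metadata"],
        outputs=["generated text responses"],
        downstream_recipients=["analytics vendor"],
        third_party=True,
    ),
]
```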

Don't overlook third-party AI solutions and the use of AI by third-party vendors as part of your assessment. For third-party AI solutions, request their AI policies and administer AI due diligence questionnaires. Also consider the provenance of the data used to develop their AI tools. Review the types of data sets used to train the AI algorithms and the purposes for which the AI tools were developed, and evaluate whether those reflect the types of data and purposes in your intended deployment. Also review these tools and your more traditional vendors to learn whether your data is being used for their own AI purposes (or others').
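
The due diligence points above can be tracked the same way. The sketch below is hypothetical; the fields simply mirror the questions just discussed, and None marks items not yet reviewed.

```python
# Hypothetical due-diligence record for a third-party AI tool.
# Requires Python 3.10+ for the "bool | None" syntax.
from dataclasses import dataclass, field

@dataclass
class VendorDueDiligence:
    vendor: str
    policy_received: bool = False            # vendor shared its AI policies
    questionnaire_returned: bool = False     # due diligence questionnaire
    training_data_sources: list[str] = field(default_factory=list)
    intended_purposes: list[str] = field(default_factory=list)
    matches_our_deployment: bool | None = None     # purposes align with ours?
    uses_our_data_for_their_ai: bool | None = None  # unknown until reviewed
```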

3. Leverage existing principles and resources for today and champion flexibility for tomorrow

As organizations grapple with new challenges, changing landscapes and uncertainty posed by AI technologies and regulation, it is easy to get overwhelmed. For areas of uncertainty, you can achieve some clarity and purpose by centering AI governance on your established organizational values and principles. And remember, many AI governance resources already exist.  

Initial AI governance efforts will need to adapt continuously as new technologies, use cases, laws and regulations, and market standards evolve. As a result, AI governance efforts should favor flexible strategies. For example, compartmentalization and machine unlearning methods may help businesses retain models when the initial training data becomes unusable or problematic due to legal or other reasons, without needing to delete and rebuild a model in its entirety. AI professionals should set such expectations for flexibility early and often in 2024 and in the years to come.
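
To make the compartmentalization idea concrete: one well-known approach is sharded training in the style of SISA (Bourtoule et al.), in which sub-models are trained on disjoint data shards so that removing a record requires retraining only one shard rather than the whole model. The sketch below is a minimal, hypothetical illustration using scikit-learn; the data, shard count and model choice are arbitrary.

```python
# Minimal sketch of SISA-style compartmentalized training, showing
# how "unlearning" one record retrains a single shard, not everything.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))               # synthetic stand-in data
y = (X[:, 0] + X[:, 1] > 0).astype(int)

N_SHARDS = 5
shards = np.array_split(np.arange(len(X)), N_SHARDS)  # disjoint shards

def train_shard(idx: np.ndarray) -> LogisticRegression:
    return LogisticRegression().fit(X[idx], y[idx])

# One sub-model per shard; predictions aggregate across shards.
models = [train_shard(idx) for idx in shards]

def predict(x: np.ndarray) -> int:
    # Majority vote across the shard models.
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return int(np.round(np.mean(votes)))

def unlearn(record_id: int) -> None:
    # Drop the record from its shard and retrain only that sub-model.
    for s, idx in enumerate(shards):
        if record_id in idx:
            shards[s] = idx[idx != record_id]
            models[s] = train_shard(shards[s])
            return

unlearn(42)  # retrains one shard; the other four are untouched
```

The trade-off is that an ensemble of shard models can underperform a single model trained on all the data, so shard count should be weighed against the expected rate of deletion requests.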

