
Connecticut takes a first stab at regulating government use of AI

Artificial intelligence can positively impact the work of government agencies, and already has, making it more efficient to analyze data and make critical decisions related to the issuance of government benefits, hiring and more. However, state agencies that use AI-enabled tools may not necessarily understand them, and there are ample examples of AI systems that render biased, discriminatory or inaccurate outcomes. As state-level legislation continues to focus on the development and use of AI by government entities, Connecticut is the first to cross the finish line to regulate the state's use of AI with its passage of SB 1103, An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy. The act was signed into law 7 June 2023, and certain provisions take effect immediately. Although the final bill is less ambitious than what was originally proposed earlier this year, Connecticut has taken a significant step toward the regulation of government AI procurement and has laid the groundwork for the Connecticut legislature to pass a private sector AI bill next year.

SB 1103 — Then and now

At a high level, the act provides a framework for state agency use of systems that employ artificial intelligence and restricts state agencies from employing AI systems that have not undergone impact assessments, or that have been shown to result in unlawful discrimination or disparate impact against specified individuals or groups. Additionally, the act tackles data privacy by requiring state contracting agencies to have a provision in every contract stating that the third-party business will comply with the Connecticut Data Privacy Act, where applicable.

The bill as originally proposed would have established an Office of Artificial Intelligence and an AI Advisory Board, created specific roles responsible for developing and implementing policies related to AI use, e.g., the AI officer and AI implementation officer, and formed a temporary task force. The AI officer would be responsible for creating procedures, while the AI implementation officer would be responsible for inventorying all automated systems used by each state agency. The role of the task force would then be to develop and make recommendations to create an AI Bill of Rights in alignment with the White House's "Blueprint for an AI Bill of Rights."

Rather than adopting the original structure, the act instead gives the Office of Policy and Management (OPM) the responsibility of developing such policies and procedures by 1 Feb. 2024, while assigning the Department of Administrative Services (DAS) responsibility for performing ongoing assessments of such systems. To achieve ethics and equity in AI deployment, the policies are expected to:

  1. Provide guidance on how state agencies should develop, use and consistently assess such AI systems.
  2. Ensure that no AI system actively utilized by state agencies results in unlawful discrimination or disparate impact, aligned with federal and state data privacy and discrimination laws.
  3. Require state agencies to assess any potential impact before implementing the use of these AI systems.
  4. Provide for DAS's performance of ongoing assessments ensuring the compliance of AI systems.

The act also extends these requirements to the Judicial Department to conduct an annual inventory of deployed AI systems, and to develop policies and procedures meeting the above expectations for the Judicial Department's AI use.

While the act discards the Office of Artificial Intelligence and AI Advisory Board, it restructures the temporary task force into a 21-member working group consisting of stakeholders and experts in areas related to AI to develop a report of best practices for regulating AI implementation in government moving forward. A consistent theme across the act is transparency, as all inventory reports required under the act must be publicly shared online.

In summary, the act requires:

  • DAS inventory agency systems using AI by 31 Dec. 2023 and make the inventory publicly available on the state's open data portal.
  • OPM develop and establish policies and procedures for responsible government AI use by state agencies no later than 1 Feb. 2024 and make them available on the office’s website.
  • DAS perform ongoing assessments of such systems in accordance with OPM's policies and procedures beginning 1 Feb. 2024.
  • The Judicial Department conduct an inventory of the department's systems using AI by 31 Dec. 2023 and make the inventory available on the department's website.
  • The Judicial Department develop and establish policies and procedures for responsible government AI use by the department by 1 Feb. 2024 and conduct ongoing assessments of such systems.
  • Beginning 1 Feb. 2024, neither state agencies nor the Judicial Department implement any AI system that results in unlawful discrimination or disparate impact.
  • On or after 1 Oct. 2023, no state agency enter into any contract with a business unless the contract contains a provision requiring compliance with all applicable data privacy provisions.
  • Effective from passage, a working group be established to make recommendations for equitable and ethical AI use in state government and provide a report by 1 Feb. 2024, upon submission of which the working group will terminate.

Although this act focuses on the actions of the public sector, it remains relevant for private businesses, specifically those that contract with the state of Connecticut.

Key takeaways for businesses

If your organization provides AI-enabled solutions and conducts business with the state of Connecticut, or is interested in doing so, there is no time like the present to consider the following steps:

  • Understand the scope and breadth of the Connecticut Data Privacy Act and be prepared to demonstrate compliance with its provisions.
  • Conduct internal or external audits of your AI-enabled solutions and be prepared to demonstrate that the use of such systems does not (i) result in any unlawful discrimination against any individual or group of individuals, or (ii) have any unlawful disparate impact on any individual or group of individuals on the basis of any actual or perceived differentiating characteristic, including, but not limited to, age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability or lawful source of income.
  • Be prepared to assist your state agency customers with satisfaction of the act's impact assessment obligations.
