US Senate hearing showcases consensus for AI guardrails

The path to meaningful oversight of artificial intelligence in the U.S. may manifest through legislation crafted by the U.S. Congress. A bipartisan fact-finding mission seeking balanced regulation is ongoing, including within the U.S. Senate Committee on the Judiciary's Subcommittee on Privacy, Technology and the Law, which probed key stakeholders in the current AI boom.

The hearing set out to establish the subcommittee's baseline knowledge of AI while casting a wide net on how best to regulate the harms algorithms and emerging AI technologies are capable of without sacrificing their benefits.

OpenAI CEO Sam Altman, whose company is arguably at the epicenter of the AI frenzy with its popular generative AI tool, ChatGPT, told lawmakers Tuesday that innovation must remain possible under any regulations put in place. But he also made clear that rules of the road will be necessary to protect consumers from the risks AI tools pose.

"I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that," Altman said. "We want to work with the government to prevent that from happening, but we try to be very clear-eyed about the downside case and the work we have to do to mitigate that."

He characterized the current AI landscape as a potential "printing press moment."

Regulation and a regulator

A common theme through the hearing was ensuring AI oversight is addressed before the technology gets out of hand. 

Lawmakers vowed not to let AI deployment advance too far before setting guardrails, admitting they previously missed the boat on other digital revolutions, such as social media and corporate liability under Section 230 of the Communications Decency Act.

"Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past," subcommittee Chair Sen. Richard Blumenthal, D-Conn., said. "Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real."

IBM Vice President and Chief Privacy and Trust Officer Christina Montgomery, a member of the IAPP Governance Center Advisory Board, assured the subcommittee that it had hardly missed its opportunity and lawmakers were diving in at "precisely the time" they needed to.

However, she implored Congress to adopt a "precision regulation approach to AI."

"This means establishing rules to govern the deployment of AI in specific use cases and not regulating the technology itself," Montgomery said.

There should be "different rules for different risks," Montgomery said, noting the impacts of a chatbot that shares restaurant recommendations differ dramatically from those of a system that makes decisions about credit, housing or employment.

Tied to risks, she called for guidance "on AI end uses or categories of AI-supported activity that are inherently high-risk. ... Risk can be assessed in part by considering the magnitude of potential harm and the likelihood of occurrence." 
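Montgomery's testimony stops at the principle, but the magnitude-times-likelihood idea is easy to make concrete. The following Python sketch is purely illustrative, not IBM's methodology or any proposal from the hearing; the 1-5 ratings and tier cutoffs are hypothetical:

def risk_tier(harm_magnitude: int, likelihood: int) -> str:
    """Combine hypothetical 1-5 ratings for magnitude and likelihood of harm into a coarse tier."""
    score = harm_magnitude * likelihood  # simple product heuristic; not a statutory formula
    if score >= 15:
        return "high risk, e.g., credit, housing or employment decisions"
    if score >= 6:
        return "medium risk: heightened transparency and testing"
    return "low risk, e.g., a restaurant-recommendation chatbot"

print(risk_tier(harm_magnitude=5, likelihood=4))  # high risk
print(risk_tier(harm_magnitude=1, likelihood=2))  # low risk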

She said companies need to be transparent so people know when they are interacting with an AI system and "whether they have recourse to engage with a real person, should they so desire." AI developers should disclose technical information about a system, "as well as the data used to train it, to give society better visibility into how these models operate." 

Montgomery also called for impact assessment requirements to demonstrate how systems "perform against tests for bias" and other significant impacts. 
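Montgomery did not name specific tests, but one common bias check an impact assessment might include, offered here only as a hedged illustration, is the demographic parity gap: the spread in favorable-outcome rates across groups.

def parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Return the largest difference in favorable-outcome rate across groups (0.0 means parity)."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {parity_gap(outcomes, groups):.2f}")  # prints 0.50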

Blumenthal said AI innovation is capable of following "basic expectations common in our law." Transparency, testing and results disclosures, and "scorecards or nutrition labels" were among his pitches for baseline requirements developers could adhere to.

Altman supported the notion of regulation on certain aspects of AI, but pitched a more general regime focused on "a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities."

Sen. Dick Durbin, D-Ill., queried which U.S. enforcement body could "respond to the challenge" of reining in AI developers.

New York University Professor Emeritus Gary Marcus said the U.S. Federal Trade Commission or the Federal Communications Commission could assume duties if responsibilities needed to fall to an established agency. However, he indicated AI's unique and broad challenge may call for a novel enforcement approach.

"We probably needed a cabinet-level organization within the U.S. in order to address this," Marcus said. "The number of risks is large. The amount of information to keep up on is so much. We need a lot of technical expertise and coordination of these efforts.

"I've personally suggested we might even want an international agency for AI," Marcus added. "Even from the perspective of companies it might be a good thing. They don't want a situation where you take these models, which are expensive to train, and you have to have one for every country … and for each jurisdiction a company has to train another model."  

If AI enforcement were to fall to an existing agency in the U.S., the FTC has recently made clear it's prepared to respond.

The commission is already rolling out guidance notes on AI matters within its purview, and FTC leadership is chiming in about the agency's role. FTC Chair Lina Khan recently said the commission "will vigorously enforce" its statutes where applicable, while FTC Commissioner Alvaro Bedoya cited the applicability of Section 5 of the FTC Act when discussing AI enforcement at the IAPP Global Privacy Summit 2023.

Governance practices

In the absence of federal rules, witnesses made clear internal guardrails and principles to manage AI risk continue to be essential and top of mind. Blumenthal urged AI developers to consider governance practices now, noting "the AI industry doesn't need to wait for Congress" to install actionable risk mitigation.

According to Montgomery, IBM moved ahead of the governance curve years ago, recognizing AI had the potential to grow as quickly as it has. She pointed to IBM's efforts to designate an AI ethics representative and an AI ethics board, urging any AI developer to explore similar oversight structures to ensure "ethics and trustworthiness are key to AI adoption."

Montgomery, who recently appeared on an IAPP LinkedIn Live on AI governance, said companies "must build, not undermine trust" and that this is no longer the moment to "move fast and break things." She said, "the need for reasonable policy and sound guardrails is clear. ... Congress and the business community need to work together to get this right." 

"If businesses do not behave responsibly in the ways they build and use AI, customers will vote with their wallets. And with AI, the stakes are simply too high, the technology too powerful, and the potential ramifications too real," Montgomery said. "If a company is unwilling to state its principles and build the processes and teams to live up to them, it has no business in the marketplace."

Altman outlined the many safety measures OpenAI has taken as it develops updated models of ChatGPT and its other AI-powered tools, including independent safety and risk assessments. He said the launch of GPT-4 was delayed six months while the company executed a rigorous risk review.

The key to OpenAI's governance processes is to let them evolve as the technology does, he said.

"We are investing in developing enhanced evaluations for more powerful models, including assessing AI models for capabilities that could be significantly destabilizing for public safety and national security, so we can develop appropriate mitigations prior to deployment," Altman said in his written testimony.

"Addressing safety issues also requires extensive discussion, experimentation, and engagement, including on the bounds of AI system behavior. We have and will continue to foster collaboration and open dialogue among stakeholders to create a safe AI ecosystem."

Concerns

A chief concern raised by Senate Subcommittee on Privacy, Technology and the Law Ranking Member Josh Hawley, R-Mo., was the risks posed to personal data that's used to train AI systems.

Hawley fears "an AI system that is trained on that individual data that knows each of us better than ourselves" and the personal data "supercharging" that could "allow individual targeting of a kind we have never even imagined before." Such influence could sway democratic elections, propagate misinformation and even potentially undermine self-determination.

None of the witnesses work for ad-driven businesses, but they agreed Hawley was rightly alarmed by theoretical hyper-targeting through AI. Altman said he thinks some companies are already deploying AI for enhanced ad-targeting while NYU's Marcus provided a clearer look at how it may come to be.

"Maybe it will be with open-source language models, I don't know. But the technology is let's say partway there to being able to do that and will certainly get there," Marcus said. 

Sen. Cory Booker, D-N.J., shares Hawley's concerns from the standpoint of "corporate intention" and the race to capture an audience, a battle to be first that may leave governance principles behind. "What happens when the companies that are already controlling so much of our lives … are dominating this technology as they were before?" Booker asked.

"It's important to democratize the inputs to these systems and the values we are going to align to. And it's important to give people wide use of these tools," Altman said. "There needs to be incredible scrutiny on us and our competitors. There is a rich and exciting industry happening with research and new startups. ... It's important to make sure that with regulatory steps or new agencies that may or may not happen, we preserve that fire."

Responding to recent calls by Elon Musk and others to place a moratorium on AI development, Blumenthal said, "the world isn't going to wait. Sticking our head in the sand isn't going to help." 

Montgomery added, "we need to prioritize ethics and safety, but I'm not sure how practical it is to pause" AI development and research.  
