It's been an earth-shattering week for those following policy developments in the artificial intelligence governance space. Monday kicked off with a sweeping executive order from U.S. President Joe Biden on AI safety, security and trust. The G7 issued a code of conduct just days after the United Nations named its new AI advisory board, and this week the U.K. is hosting its AI Safety Summit, issuing the Bletchley Declaration.
Running in tandem with these developments, the IAPP is hosting its inaugural AI Governance Global in Boston, Massachusetts, a first-of-its-kind conference focused on the professionalization of AI governance.
AI policy and law in the U.S.
In light of these developments, U.S. policymakers and state-level regulators took time at AIGG to discuss the future of AI law and policy, and the potential implications for the federal government and the private sector alike.
White House Office of Science and Technology Policy Chief of Staff in the Technology Division Nik Marda said the Biden administration is taking a broad approach to AI policy, noting how benefits and risks cross industry sectors and national borders. "We tried to look at the broad impacts," Marda said, "and President Biden made it clear from the start that we needed to pull every lever across the federal government."
Marda said their approach falls within four buckets, leading off earlier this year by working with large companies deploying AI technology in order "to move the needle on what responsible AI looks like." That's where the administration secured voluntary commitments with 15 leading AI tech companies.
Second, the White House took executive action, with this week's order and related draft policy guidance from the Office of Management and Budget on how federal agencies leverage and use AI.
Importantly, the White House is calling on the U.S. Congress to pass bipartisan legislation while simultaneously collaborating with international partners, including through the G7 and by signing on to the Bletchley Declaration.
Supplementing the White House's initiatives, the National Telecommunications and Information Administration earlier this year solicited public comments on AI accountability. NTIA Policy Analysis and Development Associate Administrator Russell Hanser said the agency received more than 1,400 comments between April and June. "That's a lot for us," he said.
Hanser also noted the NTIA currently possesses a draft report documenting the input, which could be out soon.
In pulling out areas of consensus and discord in the comments, Hanser characterized three core takeaways from stakeholders on what is needed to build an accountability ecosystem in AI.
Access to information and AI system transparency are key, he said, though he conceded that "transparency is hard with respect to AI because the mathematics, the technical pieces of what goes into the algorithms can be harder to explain, but we need more access to information." He said that "transparency will also facilitate standards and norms regarding what forms that information takes, what kinds of information is made available on model cards … to make it easier to compare apples to apples for both consumers and policymakers to understand what a system does and does not do."
A second takeaway involves the need for independent evaluation, and that there will be circumstances when third-party evaluations will be critical. Hanser also warned that in some cases this could fall "out of the sphere of company interests."
"You need some sort of consequence when the system is not working as it's purported to be working," he said. These could be market consequences, where consumers "move with their feet" to a competitor if a third-party evaluation suggests a system is not operating as represented. It could also include a legal, mandatory liability or regulatory regime.
There are barriers to creating an accountability ecosystem for AI, Hanser added. With incredibly complex AI systems in place, can companies effectively communicate to consumers and lawmakers how a system works in a way that is understandable?
There is also a significant need for auditors who are appropriately trained and who have the resources and computing power to do the work. Hanser said, "Our sense is that we're going to need government investment here," while also noting "we will see a set of jobs that are highly nascent."
OSTP's Marda agreed that a workforce undergirding an accountable AI ecosystem will be crucial. The need for this new workforce will come from multiple angles, including within companies, among independent third-party auditors and within government regulators, among others.
The Biden administration recognizes that need, Marda said, and its executive order lays out a number of paths for support. These include investments in organizations like the National Science Foundation to train and educate on AI literacy and diversity. Provisions around immigration were also added to the order to bring talent into the U.S. from around the world.
"This is an all-hands-on-deck moment," Marda said.
The federal government as AI policy influencer
Though much of the White House executive order relates to federal agencies, it will have an impact on the private sector. As part of the OMB guidance, federal agencies will need to follow provisions when integrating AI systems, which are often built and provided by private companies.
California Privacy Protection Agency Board Member Vinhcent Le said the agency is not likely to issue state-level guidance anytime soon, but what the federal government does could be influential. "I think the NIST AI Risk Management Framework was very instrumental in shaping our risk assessment requirements," he said, as an example.
Davis Wright Tremaine Partner Nancy Libin, who moderated the AIGG panel, said that "even though the U.S. government structure can operate in a way that can be kind of messy, nonetheless, we get really useful information and frameworks and guidance from different parts that actually feed into the development of frameworks and guidance by other elements of government."
"Government can lead by example," Marda said, noting that there is a saying right now at the White House: "We're trying to manage the risks to seize the benefits."