The U.S. took a significant step toward a national artificial intelligence strategy Jan. 26 with the release of the U.S. Department of Commerce National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework 1.0.
Required under the National Artificial Intelligence Initiative Act of 2020, the framework is the product of 15 months of work by NIST scientists, who compiled public comments from more than 240 AI stakeholders through multiple listening sessions and workshops while producing two previous drafts of the document last year. The framework is voluntary but aims to help organizations deploying AI systems enhance those systems' trustworthiness and reduce bias, while protecting individuals’ privacy.
Along with the framework document, the NIST also released the AI RMF Playbook, which is expected to be updated every six months as best practices for navigating the framework develop, according to Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio.
“The AI RMF will help numerous organizations that have developed and committed to AI principles convert those principles into practice,” Locascio said. “It is intended to be applied across a wide range of perspectives, a wide range of sectors and technology domains and should be universally applicable to any AI technology.”
NIST’s AI RMF 1.0 is the latest example of world governments attempting to promote responsible and useful AI systems, while mitigating the potential negative impacts of machine learning algorithms, as the technology is poised to revolutionize all aspects of socioeconomic life.
The EU may be furthest along in developing a regulatory ecosystem for the deployment of AI technology with its proposed AI Act. If passed, the act would be “a highly prescriptive framework addressing AI as a product liability issue governed by certification schemes and regulatory oversight,” according to Goodwin Procter Partner Omer Tene. Across the EU, member states’ data protection authorities have begun dedicating resources to AI regulation. France’s data protection authority, the Commission nationale de l'informatique et des libertés, recently announced it will launch an AI division.
In October 2022, while work on the NIST’s AI RMF was ongoing, the White House Office of Science and Technology Policy published an AI Bill of Rights blueprint. In its recommendations, the Biden administration offers "five common sense protections to which everyone in America should be entitled," which call for the deployment of “safe and effective systems,” protection from algorithmic discrimination, data privacy, a notice requirement when an automated system is being used, and a dedicated human alternative on standby to fix problems with a given organization’s system.
During the AI RMF announcement, White House Office of Science and Technology Policy Principal Deputy Director for Science and Society Alondra Nelson said the NIST was instrumental in the development of the AI Bill of Rights blueprint. She said, along with AI’s potential benefits to “the way we work, learn, how we address healthcare, how we find good jobs,” the technology heightens the risk of eroding civil liberties and individual privacy, and of entrenching biases against vulnerable populations.
“The United States is taking a principled, sophisticated approach to AI that advances American values and meets the complex challenges of this technology,” Nelson said. “It's why, at the same time (as it developed the framework), NIST was at the table as OSTP developed the blueprint for an AI Bill of Rights, helping us set out practices that can be used to address one critical category of risk: The potential threats posed by AI and automated systems to the rights of the American public.”
As for the AI RMF, U.S. Deputy Secretary of Commerce Don Graves said the framework is the critical first step to enhancing "AI trustworthiness while managing risks based on our democratic values.” However, he said the framework’s effectiveness will ultimately be judged on how actionable it can be for businesses, especially small businesses relying on technology solutions to compete with larger companies.
“How readily will smaller companies be able to put the framework into practice?" he said. "How can we make it more likely that small businesses will be able to put (it) in practice, and what will be the biggest challenges? Will the framework be incorporated into standards, guidelines and practices that are adopted in the U.S. and around the world? Of course, the key question is whether the framework and its use will lead to more trustworthy and responsible AI.”
As part of the NIST’s announcement, Locascio facilitated a discussion with U.S. Chamber of Commerce Vice President Jordan Crenshaw, CIPP/US and Center for Democracy and Technology President and CEO Alexandra Reeve Givens about the practical application of the framework throughout the economy, as well as its possible social impacts.
Crenshaw said the development of the framework was “an open and transparent process” partially due to input into the final document from “the business community and civil society.” He said because the framework is not a mandate, it is likely small businesses will be able to adopt more elements of the framework as they grow and scale without fear of regulatory penalties.
“One of the key parts about this framework, as well as the playbook, is its flexibility,” Crenshaw said. “It's the admission that we're going to come back to this every few months and look at how it plays out, and we'll reevaluate as it goes along because technology evolves.”
How organizations implement the AI RMF is paramount, Givens said. She pointed to companies already using AI systems to screen job applicants, and underscored the need for a bottom-up and top-down understanding of AI's purpose and capabilities, both positive and negative, among all parties using the technology.
“Countless people across the country and around the world are being assessed through automated tools . . . so it is an area where there are real risks of discrimination and fitness for purpose, and it's an area where we have to think about that entire chain,” Givens said. “The framework is a good start, but actually, we're going to have to get a lot more specific to help people actually see themselves in these guidance documents and know the rules of the road. That’s why the integrated approach is so important.”
Top image: NIST Director Laurie Locascio