The U.S. National Science Foundation announced a two-year pilot program for a collaborative artificial intelligence resource lab, setting in motion an effort to make research around the technology more readily available.
The National Artificial Intelligence Research Resource (NAIRR) was one of the tasks given to the NSF in U.S. President Joe Biden's October 2023 executive order on AI. The agency was directed to work with other relevant regulators to create the program within 90 days of the order taking effect. Ten other federal agencies will be involved in the effort, and the NSF secured commitments from significant AI players such as IBM, OpenAI and the Allen Institute for AI to contribute projects, research and data.
During a press briefing, NSF Director Sethuraman Panchanathan billed the project as a key step in building the infrastructure needed to ensure future innovations in AI are responsible and trustworthy, helping to spur competition in the international AI industry. "Which means that we need resources to advance AI that is open to all so that every community across our nation may reap the benefits of AI," Panchanathan said.
Ensuring responsibility and trustworthiness is a major focus for regulators and the public alike. The Pew Research Center found a growing share of Americans are worried about AI's prevalence in daily life, especially the fear that it might negatively affect their privacy. Biden's AI executive order emphasizes that developers must be held to high standards to protect human rights, so "Americans trust AI to advance civil rights, civil liberties, equity, and justice for all."
The NSF pilot is narrowly focused on providing researchers with computing power, datasets, models, software, training and user support that may otherwise be inaccessible to them. Tess deBlanc-Knowles, the special assistant to the director of AI at the NSF, said the tighter scope will help create benchmarks for trustworthiness — an area, she noted, that is still being defined.
"We're intentionally designing a pilot with these in mind and to help build community consensus around questions like data standards, vetting procedures and responsible use of data," she said. "We just think it's so critical to engage the research community on how to tackle these challenges effectively."
The program will be divided into four focuses: NAIRR Open, dedicated to open AI research; NAIRR Secure, targeted toward privacy and security; NAIRR Software, studying interoperable use of AI software; and NAIRR Classroom, focused on connecting different communities through education and outreach.
A portal for researchers to apply for access to the pilot went live Jan. 24. The NSF plans to put out a call for proposals later in the spring to encourage more partners to apply and contribute.
Information compiled by the resource will be placed under an evaluative process run by an external ethics advisory committee, NSF Director of the Office of Advanced Cyberinfrastructure Katie Antypas said. The hope is to eventually support up to 400 projects and expand the program's areas of interest to the environment, infrastructure, health care, human health and AI education.
Contributions from partners range from software applications — part of NVIDIA's $30 million contribution is $24 million worth of computing on its DGX platform using AI tools — to access rights, such as free licenses for developer Weights and Biases' platform. OpenAI is throwing in $1 million in credits for model access for research related to AI safety, while Microsoft contributed $20 million in computing credits for its Azure cloud computing product.
Storing that research in one place will allow academics to conduct larger tests to remove problematic behaviors and test new training theories, according to NSF Division of Information and Intelligent Systems Director Michael Littman.
"So basically, when you have enough data and enough computing power, you can start to see some different things that really aren't visible when you're really focused on smaller scale experiments," he said.