A view from DC: White House preps a 'bridge' to AI regulation

U.S. President Joe Biden is expected to issue a new executive order on artificial intelligence in the coming weeks. Its content and scope have become the subject of much speculation in Washington.

For many months, multiple workstreams at the White House have touched on AI policy priorities that closely track the broader policy conversation in D.C. Accordingly, much of the energy behind this work is devoted to issues beyond the scope of operational AI governance. As in Congress, the Biden administration is concerned about issues of competitiveness — finding ways to secure America's lead in technological development — and national security.

That said, the message that responsible development of AI tools is key to an American style of innovation has increasingly echoed across the policy conversation. Whether, and how, that message will be reflected in the forthcoming executive order remains an open question.

Some official hints about the scope of the executive order have started to emerge. At a Chamber of Commerce event this week, U.S. Deputy National Security Advisor Anne Neuberger described the order as "incredibly comprehensive." According to NextGov's reporting of the event, Neuberger went on to say it is "a bridge to regulation because it pushes the boundaries and is only within the boundaries of what's permissible … by law." MLex reported Neuberger's remarks focused on the importance of solving the challenge of watermarking for AI-produced content to help track provenance.

At the DEF CON hacker convention in August, White House Office of Science and Technology Policy Director Arati Prabhakar told reporters that setting federal government policies on AI has become an urgent priority for the administration. "It's not just the normal process accelerated — it's just a completely different process," she said.

The executive order may also find avenues to build on the voluntary commitments the White House received from leading AI companies, which will also be one of the proposals the U.S. brings to the table at the upcoming U.K. AI safety summit, according to MLex's reporting of Neuberger's remarks.

Importantly, the administration seems mindful of the global geopolitical context of AI development and appears intent on sharing the mic with allies who are also advancing new AI principles. In announcing the most recent set of voluntary commitments, the White House said they were developed in consultation with 20 other listed governments. Further, the administration acknowledged the commitments "complement Japan's leadership of the G-7 Hiroshima Process, the United Kingdom's Summit on AI Safety, and India's leadership as Chair of the Global Partnership on AI."

Meanwhile, federal agencies are continuing to enact the policies mandated by an earlier executive order on AI. Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, directed agencies to implement principle-based policies on the deployment of AI systems. The Department of Homeland Security this month announced new policies from its AI Task Force, including one on the "acquisition and use of artificial intelligence and machine learning by Department of Homeland Security components," which directly responds to Executive Order 13960.

The other new DHS policy, Directive 026-11, goes further, covering the "use of face recognition and face capture technologies" by the agency. Some highlights of the policy:

  • All uses of facial recognition and face capture technologies must be thoroughly tested to ensure there is no unintended bias or disparate impact in accordance with national standards.
  • Citizens must be afforded the right to opt out of face recognition for specific, non-law enforcement uses.
  • Facial recognition may not be used as the sole basis for any law enforcement or civil enforcement action.
  • All new uses of these technologies must go through an oversight process via the Privacy Office, the Office for Civil Rights and Civil Liberties, and the Office of the Chief Information Officer.

At the same time, DHS Secretary Alejandro Mayorkas named Chief Information Officer Eric Hysen to serve concurrently as the department's first chief AI officer. According to the press release, "Hysen will promote AI innovation and safety within the Department, along with advising Secretary Mayorkas and Department leadership on AI issues." The Senate Committee on Homeland Security and Governmental Affairs recently approved a bill that would create a chief AI officer role at each federal agency.

Like DHS, agencies across the federal government seem to be reflecting on how AI will impact their missions, whether prompted by executive order or legislative mandate. Director Prabhakar's comments to reporters at DEF CON also mentioned how encouraged she was by the work of federal agencies: "They know it's serious, they know what the potential is, and so their departments and agencies are really stepping up," she said.

The homepage of the official website for the National AI Strategy now provides direct links to AI landing pages from 27 agencies and departments ranging from the Department of Defense's Chief Digital and Artificial Intelligence Office, AI.mil, to the Department of Education's Office of Educational Technology, which recently published a report on AI and the Future of Teaching and Learning.

Another major development is expected in the form of updated guidance from the Office of Management and Budget, arguably the most influential agency when it comes to coordinating and implementing cross-government policy initiatives.

Although earlier White House announcements said the OMB draft guidance would be released "this summer," it has still not become available for public comment. Once implemented, new OMB policies will update its 2020 memorandum to the heads of all executive department agencies to "inform the development of regulatory and non-regulatory approaches regarding technologies and industrial sectors that are empowered or enabled by AI and consider ways to reduce barriers to the development and adoption of AI technologies."

Advocates from across the spectrum have engaged on the forthcoming executive order. Many joined the Leadership Conference on Civil and Human Rights in calling on the Administration to make its AI Bill of Rights binding policy across the federal government. Others, like the U.S. Chamber of Commerce, argued for making targeted adjustments to immigration policies to attract and retain top AI talent in the U.S.

Even if the details are still a mystery, it is clear the order will include a wide-ranging set of policies that extend to the limits of the executive branch's legal powers. At least when it's not in a state of shutdown, the federal government will continue to act on AI.

Here's what else I'm thinking about:

  • The foundation of AI efforts should be a comprehensive federal privacy law. In remarks at a recent Forum Global event, Representative Cathy McMorris Rodgers, R-Wash., chair of the House Energy and Commerce Committee, reminded policymakers that privacy legislation would help bake protections for consumers into AI development and deployment practices. "I worry that lawmakers might lose focus on what should be the foundation for any AI efforts, which is establishing comprehensive protections on the collection, transfer and storage of our data," she said. "It is paramount that we do this before jumping into any AI legislation." She also highlighted the importance of protecting children online by passing comprehensive legislation, claiming that the American Data Privacy and Protection Act would have the "strongest kids protections of any federal or state law." McMorris Rodgers said she was personally committed to "doing everything in my power to reach consensus" on privacy legislation.
  • Another important set of standards addresses the use of AI in the employment context. The workplace continues to be at the forefront of AI governance innovation, this time through a convening hosted by the Future of Privacy Forum. Along with ADP, Indeed, LinkedIn and Workday, FPF released a report titled Best Practices for AI and Workplace Assessment Technologies. Alongside other industry-led principles, local regulations and federal scrutiny, the employment context serves as a testing ground for AI best practices in privacy, nondiscrimination, human oversight and transparency.
  • How do we protect human rights in immersive technologies? A new report from the NYU Stern Center for Business and Human Rights examines two of the most pressing issues related to the development of immersive technologies: the potential erosion of privacy, including mental privacy, and the proliferation of harmful behavior in virtual environments, including sexual harassment and abuse of children. It includes a set of privacy recommendations for extended reality platforms and policymakers.

Upcoming happenings:

  • 26 Sept.: Connected Health Initiative hosts a conference titled AI and the Future of Digital Healthcare (National Press Club).
  • 27 Sept.: The monthly Tech Policy Happy Hour will take place (Dirty Habit).
  • 27 Sept.: Politico hosts the AI and Tech Summit (hybrid).
  • 27-28 Sept.: Fisher Phillips hosts a conference titled AI Strategies @ Work: Preparing Business Leaders for Tomorrow (Willard Intercontinental).
  • 28 Sept.: The Information Technology Industry Council hosts a conversation with OSTP Director Arati Prabhakar on building responsible AI (ITI).
  • 28 Sept.: Public Knowledge hosts its 20th annual IP3 Awards (Ronald Reagan Building).

Please send feedback, updates and draft EO text to cobun@iapp.org.

