On 16 Oct., the U.S. Office of Management and Budget published a request for information on federal agency handling of commercially available information containing personally identifiable information. The OMB is focused specifically on the use of such data in systems driven by artificial intelligence.

This is an important opportunity to shape the federal government's still-developing policies on both commercially available data and AI. Comments are due 16 Dec.

The RFI speaks in terms of "privacy risks," but a narrow interpretation of privacy, too often reduced to mere transparency, would obscure the meaning and value of privacy as "fair information practices." Discussions of "responsible handling" of personal information must focus on consequences, because governments, like corporations, use information to make decisions about people. In the case of government, these can include highly consequential decisions on eligibility for benefits, such as veterans', disability and family support; on loans, such as those for housing and education; and on tax enforcement and criminal justice.

Last year, I collaborated with cybersecurity and public policy expert Susan Landau of Tufts University, AI developer Ece Kamar of Microsoft and computer scientist Steve Bellovin of Columbia University to convene a two-day workshop on government use of AI and other advanced decision-making tools. Participants confirmed that automated systems are sometimes biased and divorced from human realities, and that, when their internal workings are shielded by the intellectual property claims of the contractors who develop the systems for the government, they can produce unfair decisions without redress.

The addition of commercially available information, which may be incomplete, incorrect or irrelevant, could only exacerbate the problem.

Thus one of the biggest privacy risks in the use of commercially available information and AI is the risk of erroneous, opaque decisions with severe adverse consequences for individuals, such as denial of health benefits, loans or employment and unjustified targeting in enforcement actions.

The RFI touches on some of these issues when it asks for input on "appropriate mitigation of privacy risks." One of the most powerful mitigation tools available is due process — the right to challenge adverse decisions about oneself.

A key finding to emerge from our workshop discussion last year is that it is in fact possible to design decision-making systems that use the most advanced technology while remaining understandable and contestable. The concrete recommendations coming out of our workshop on how to achieve "due process by design" in AI-based systems are relevant both to government agencies and to private sector entities looking to reap the benefits of AI while avoiding its risks.

Our recommendations

Notice to individuals that their case has been decided in whole or in part by an automated process is a prerequisite of contestability. Notice must be understandable and actionable.

Contestability must be incorporated into the system design, beginning with the decision whether to use an automated system in a decision-making or decision-supporting role at all.

Those who will be directly affected by an automated system must be involved in design consultations and testing.

The automated features of a system should never be allowed to supplant or displace the criteria specified in law for any given program or function. For example, if the legal standard for a disability benefit is medical necessity, the factors or criteria considered by the automated process should not be presumed to be the only way for an applicant to demonstrate medical necessity.

Contestability features of a system must be stress tested with real-world examples and scenarios before field deployment.

Integrating contestability considerations into the procurement process — the nuts and bolts of government contracting — is critical because many automated decision-making systems will be designed and built, and may be managed as a service, for the government by contractors. Solicitations and contracts must clearly require contractors to deliver contestability as a core system feature. Contractors should not be allowed to use assertions of trade secrecy or other intellectual property claims to frustrate contestation.

Federal officials should ensure contestability is required of the states implementing federal programs.

The OMB request for input on commercial data and AI offers a new opportunity for the federal government to fully embrace these key points.

Jim Dempsey is the managing director for the IAPP Cybersecurity Law Center.