Today marks a milestone in the history of data privacy: the 50th anniversary of the Fair Information Practice Principles. On 30 June 1973, the original FIPPs were published in the Federal Register as part of the report "Records, Computers and the Rights of Citizens."

The report begins simply, "This is a report about changes in American society which may result from using computers to keep records about people."

The changes never ceased. As we rush headlong into the artificial intelligence era, it is worth reflecting on the concerns raised by pioneering thinkers 50 years ago, when "digital computers" were first proliferating. The parallels are uncanny.

There is so much history packed into this single administrative report. Its significance to the spread of privacy principles has been extensively documented. In the U.S. it directly influenced the Privacy Act of 1974, which remains the core framework governing the federal government's data privacy practices. It also influenced an Organisation for Economic Co-operation and Development report that refined the FIPPs and is often cited as an ancestor of the EU General Data Protection Regulation. The agency that commissioned the report, the U.S. Department of Health, Education and Welfare, was the progenitor of the modern Department of Health and Human Services, which continues to serve as a custodian of personal records, as well as a privacy enforcer and rulemaking entity.

Like any historical framework, the FIPPs did not emerge from a vacuum. They were a product of their time, colored by the rise of computers, but even more so by concerns about the misuse of government authority. As the Department of Homeland Security summarizes in its own FIPPs memo, the backdrop of the HEW report "included several years of intense Congressional hearings examining the surveillance activities of the Nixon and J. Edgar Hoover era and the post-Watergate support for government reform."

The HEW report echoed concerns shared by other forward-thinking books and reports of its day. It built on Alan Westin's list of "criteria for weighing conflicting interests" from his 1967 book "Privacy and Freedom." The report also prominently credits three government publications:

  • "Privacy and Computers (1972), the report of a task force established jointly by the Canadian Departments of Communications and Justice;
  • Data and Privacy (1972), the report of the Swedish Committee on Automated Personal Systems; and
  • Databanks in a Free Society (1972), the report of the National Academy of Sciences Project on Computer Databanks."

It may not surprise you that the last of these was also authored by Alan Westin.

Other important privacy thinkers served on the committee itself. Among them was the chair, Willis Ware, an engineer at the RAND Corporation who was arguably one of the first privacy professionals. In 1966, in an internal memo titled "Future Computer Technology and Its Impact," Ware wrote:

"The computer will touch men everywhere and in every way, almost on a minute-to-minute basis. Every man will communicate through a computer whatever he does. It will change and reshape his life, modify his career and force him to accept a life of continuous change."

A life of continuous change. Well, that sure hits home.

Immediately after the publication of the HEW report, Ware brought its conclusions back to RAND. In another internal memo with the same name as the HEW report, Ware attempted to summarize the reasoning behind the final report, minus the bureaucratic language.

As Ware ponders how best to promote the spread of the FIPPs across public and private organizations, his words remain strikingly relevant to today's privacy and AI policy debates:

"The principles just given are considered by the Committee as the minimum set of rights that should be available to the individual. The question becomes how to extend these rights to the citizen. An obvious mechanism, and one that has been suggested many times, is the creation of a centralized federal agency to regulate all automated personal data systems. Such an agency would be expected to register or license the operation of such systems, could establish specific safeguards as a condition of registration or licensure and would generally be the watchdog over all data banks public and private. Because systems used by the enormous number and variety of institutions dealing with personal data vary greatly in purposes, complexity, scope and administrative context, an agency to regulate, license and control such a breadth of activity would have to be both large in scale and pervasive. The procedures for regulation or licensing would become extremely complicated, costly and might unnecessarily interfere with desirable application of computers to record keeping. Moreover, such a regulatory body would be another instance of federal government intrusion into the affairs of industry, the citizen, and other levels of government.

"Thus, the Committee has proposed a solution that was felt to provide for the citizen equally strong rights but at the same time to avoid the necessity for a regulatory body. It has recommended there be created by legislation a Code of Fair Information Practice applicable to all automated personal data systems. This Code would define "fair information practice" as adherence to specified safeguard requirements, would prohibit violation of any requirement as unfair information practice, would provide both civil and criminal penalties for unfair information practice, would provide for injunctions to prevent violation of any safeguard requirements, and, finally, would permit both individual and class actionable suits for actual liquidated and punitive damages."

Will the next 50 years bring us a comprehensive data privacy law?

Rather than holding your breath, I hope you'll join me today in pouring one out for the birthday of the FIPPs. Tomorrow we can try again to live up to these enduring principles.

Here's what else I'm thinking about:

  • Tomorrow is a big day for state privacy laws. The comprehensive consumer privacy laws in Colorado and Connecticut come online on 1 July, as does the California Privacy Protection Agency's ability to enforce the amended California Consumer Privacy Act. However, rumor has it that a pending court decision may enjoin the CPPA from enforcing its regulatory updates until a year after each is finalized, meaning the first batch may not be enforceable until the end of March 2024.
  • The National AI Advisory Committee concluded a series of briefing sessions with invited experts following the publication of its Year One Report. In the recent sessions, the NAIAC heard statements from groups representing many sectors invested in the future of AI, from privacy advocates to human rights organizations to national labor unions. Committee and working group members asked questions about algorithmic bias, trust in institutions and public education goals. While presenters applauded the NAIAC's work so far, many pressed it to fully and openly adopt the Blueprint for an AI Bill of Rights. Likewise, across the sessions, presenters placed nearly unanimously high expectations on the NAIAC's newly formed Law Enforcement Subcommittee.
  • The Digital Advertising Alliance explained how its privacy principles "should" be applied to the Internet of Things. A document applying the DAA's Self-Regulatory Principles for Online Behavioral Advertising to connected devices has been a long time coming. But the "best practices" for applying the Principles to IoT devices leave an open question about the intended enforceability of the new code, which appears to extend only the principles of transparency and control to the connected device ecosystem.
  • Meta released a set of system cards to explain how its recommendation algorithms work. The new materials show how the company plans to demonstrate its commitment to providing system-level transparency for machine learning systems and to explaining how users can influence algorithmic outputs.

Under scrutiny

  • The "appification of daily life," and why we should resist it, was the subject of a recent blog from Georgetown's Meg Foster.
  • Foundation models were graded against the EU's proposed AI Act requirements by a Stanford team and found to be lacking, even without considering privacy standards.
  • Bias audits, such as those required under New York City Local Law 144, are problematized in a fascinating analysis by Jey Kumarasamy and Brenda Leong, CIPP/US, for IAPP's The Privacy Advisor.

Upcoming happenings:

Please send feedback, updates and historical records to cobun@iapp.org.