Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

One of the most influential documents for the modern privacy profession is the U.S. National Institute of Standards and Technology's Privacy Framework, a guidance document first announced in 2018 before the final version of its first iteration was published in January 2020. It serves as a voluntary roadmap for organizations to build effective operational governance processes for privacy risks throughout the data lifecycle — or, as the subtitle of the framework puts it, “a tool for improving privacy through enterprise risk management.”

This week, NIST unveiled a minor but significant draft update to the Privacy Framework. Professionals everywhere should take notice, especially given that comments on the initial public draft of the new “Privacy Framework 1.1” are welcomed through 13 June 2025.

NIST scientists — those tireless and, all too often, thankless organizers of the civilized world — affectionately refer to their Privacy Framework by the acronym “PFW.” After all, there is nothing NIST scientists love more than an acronym.

Since its development, the PFW has been referenced in organizational governance processes around the world. It is mapped throughout the IAPP’s Certified Information Privacy Manager certification, the professional credential for operationalizing privacy. In a development that shows how soft law can often influence black-letter law, the PFW has even been explicitly referenced in U.S. state privacy laws.

In Tennessee’s comprehensive consumer privacy law, it is incorporated as a sort of soft safe harbor. Organizations can establish an affirmative defense to violations of the Tennessee Information Privacy Act if they create, maintain, and comply with a written privacy program that “reasonably conforms” to the PFW, though the law also somewhat ambiguously allows for conformity with “other documented policies, standards, and procedures designed to safeguard consumer privacy.” Incidentally, Tennessee’s will be the next state privacy law to go into effect, on 1 July 2025.

The latest update to the framework is meant to uphold the promise of the guidance as a living document. It helps organizations keep pace with the modernization of the privacy profession, responding to the ever more interconnected nature of digital governance and bridging critical gaps between privacy, cybersecurity, and the unique risks raised by emerging technologies like artificial intelligence.

Elevating privacy leadership

The updated Privacy Framework introduces a stand-alone category for privacy roles.

This marks an important step in ensuring privacy is not just a technical concern but an organizational priority. The framework also underscores that leadership must bear responsibility and accountability for privacy outcomes, though it removes specificity about the seniority of the leaders who should be involved in privacy decision-making, likely to allow flexibility across organizations with differing risk profiles.

Perhaps most critically, the framework update emphasizes that privacy roles must be adequately resourced via staff, funding, and technology. As before, the risk-management framework uses a “tier” progression. As NIST explains, “An organization can use the Tiers to communicate internally about resource allocations necessary to progress to a higher Tier or as general benchmarks to gauge progress in its capability to manage privacy risks.”

No matter where an organization’s risk-informed management determines its privacy program should fall on the tiers, the PFW provides a map for growing through the tiers from “partial” to “risk informed” to “repeatable” and finally to “adaptive.”

Integrating privacy and AI governance

With the growing adoption of AI across industries, privacy risks cannot be addressed in isolation. NIST’s update takes pains to reiterate that the framework is designed to be fully technology neutral. In general, it does not need to be adapted for each newly emerging technology, and it even removes some prior references to things like biased data analytics.

Nevertheless, Section 1.2.2 of the updated framework introduces guidance for establishing roles, responsibilities, and accountability specifically for AI-related privacy concerns. The draft includes a brief but wide-ranging review of AI’s privacy risks.

“Privacy risks can arise, for example, when AI systems are trained on data that was collected without individuals’ consent or have missing or inadequate privacy safeguards. An AI system could also reveal information about individuals by estimating individuals’ personal attributes or through privacy attacks such as data reconstruction, prompt injection, or membership inference.… Systemic, computational and statistical, as well as human-cognitive biases can exist and persist in AI systems that make important decisions and predictions about people. In some cases, AI technology may be the key enabler of privacy risk (e.g., use of generative AI to create privacy-invasive images, video, or audio).”

The draft goes on to highlight how the PFW works hand-in-hand with other NIST risk-management publications to help with a holistic and risk-informed approach to AI governance. Alongside the PFW, organizations can consult the Cybersecurity Framework 2.0 and the AI Risk Management Framework. As NIST explains, “Treating AI risks together with other enterprise risks (e.g., privacy, cybersecurity) supports integrated outcomes and organizational efficiencies.”

Aligning with broader governance trends

Speaking of the Cybersecurity Framework, one of the intentions behind this latest draft of the PFW is to align it with NIST’s February 2024 update to that framework, which received a full “2.0” designation. Realigning the mapping of governance functions across frameworks helps to reflect the growing legal, regulatory, and contractual demands across digital responsibility domains.

In PFW 1.1, the realignment mostly involves moving governance functions around in the “Core,” which restructures it to be more directly comparable to its cybersecurity sibling. NIST describes the function of the Core as enabling “a dialogue — from the executive level to the implementation/operations level — about important privacy protection activities and desired outcomes.”

The newly integrated approach of the frameworks seems to be designed to foster organizational efficiencies and more robust outcomes by addressing enterprise risks together. As the boundaries of privacy, cybersecurity, and AI governance blur, the frameworks we rely on must evolve in step.

NIST’s updates exemplify this progression, equipping organizations and professionals with the tools needed for a digital age defined by interconnectivity.

For an even deeper dive on how these separate frameworks can best work together, it is worth monitoring NIST’s ongoing work to build a “Data Governance and Management Profile.” The effort began last year and is meant to establish a resource that shows how to use all the data governance frameworks in a complementary way.

Your opinion matters

NIST includes a “note to reviewers” on page 4 of the draft update, requesting general feedback along with stakeholders’ opinions on some specific questions.

In the most substantive of these requests, the NIST team asks how it can better provide “implementation examples” for the PFW, seeking feedback on whether mapping task statements from the NIST Privacy Workforce Taxonomy to the Core would be a useful way to demonstrate implementation.

As always, engagement with the production of these underlying frameworks can pay dividends for the establishment of benchmarks and best practices for the privacy profession.

Please send feedback, updates and acronyms to cobun@iapp.org

Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director, Washington, D.C., for the IAPP.

This article originally appeared in The Daily Dashboard and U.S. Privacy Digest, free weekly IAPP newsletters. Subscriptions to this and other IAPP newsletters can be found here.