
Privacy Perspectives | What does AI need? A comprehensive federal data privacy and security law


Artificial intelligence is ushering in a transformative era at an accelerated pace that could fundamentally alter how our society operates. It has undoubtedly caught the attention of many members of the U.S. Congress, mostly due to fears about how the technology might be misused and the risks associated with it, rather than its potential benefits. One key AI fear surrounds data privacy and security.

Recent action on this front came 21 June, when U.S. Sen. Chuck Schumer, D-N.Y., revealed his SAFE Innovation framework for AI, which he believes will set the stage for bipartisan development of regulations that allow the industry to safely deploy AI without stifling innovation. Schumer's calls to protect innovation and solicit multiple perspectives and opinions before diving into AI regulation are encouraging. While Schumer's framework is more of a high-level view than a substantive policy proposal, he briefly mentioned privacy as an issue that would be explored through "Insight Forums."

Schumer is correct that data privacy is an area that intersects with AI and should not be ignored. However, data privacy and security risks exist outside of AI, across various forms of technology and everyday practices like grocery shopping or driving to work. This makes broader action to protect privacy critical, rather than searching for solutions only in the context of AI. The logical foundational step is to act on comprehensive data privacy and security legislation while ensuring it remains grounded in privacy principles.

AI's privacy and security risks

As is the case with any emerging technology, there are risks and concerns but also enormous promise and benefits. As automobiles advanced, safety features progressed to make driving safer, and the same approach should apply to AI. To avoid stifling innovation, tailored regulations and guidelines should be developed with desired outcomes in mind: for example, responsible and effective AI that promotes economic growth and fosters scientific progress.

Ultimately, there are four main areas of concern to consider as both AI-specific and broader privacy actions move forward.

  • Privacy. AI models train on hundreds of gigabytes of data, often obtained through mass-scale data scraping. That data can contain sensitive information, creating a risk that someone's private information could be revealed in the AI's output. One safeguard some organizations use against these privacy harms is differential privacy (see the sketch after this list). Organizations will also have to implement effective policies, such as restricting the use of AI when sensitive or confidential information is involved. Of course, while there are data privacy risks associated with AI use, AI can actually help achieve data privacy compliance. For example, it can quickly identify sensitive information across a large data ecosystem and ensure it is mapped correctly and adequately secured or deleted.
  • Security. Security breaches can affect AI systems just as they can any other internet-connected system. The risk that internal and external threats could lead to leaked personal, sensitive or proprietary information should be considered. It is critical that organizations retaining this type of information follow industry cybersecurity standards such as the U.S. National Institute of Standards and Technology's cybersecurity framework. It should also be noted that while there are risks AI will be used to execute cyberattacks, AI can also be deployed to secure networks, protect data privacy and actively search for vulnerabilities at a scale that humans cannot replicate.
  • Bias. Algorithms can be intentionally or unintentionally designed with bias, and if the input data is biased, the output may be biased too. This raises concerns that AI could cause harm through biased decision-making in judicial proceedings and in financial, education and employment opportunities, among other areas. Preventing these biased outcomes is important, and AI can be a vehicle to accomplish this; some argue AI is actually more effective than humans at identifying discrimination (a simple parity screen is sketched after this list). Ultimately, more data, not less, will be needed to create effective AI tools that weed out human and systemic biases.
  • Mis/Disinformation. The intentional and unintentional misuse of AI by humans is another risk. Bad actors can use AI to spread mis/disinformation or execute complex phishing schemes to commit fraud or trick individuals into providing sensitive information. But AI will also be useful for its powerful ability to identify mis/disinformation and content like deepfakes.
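
To make the differential privacy point above concrete, the sketch below applies the Laplace mechanism to a simple count query. The function name, dataset and epsilon value are illustrative assumptions, not drawn from the article or any specific framework; production systems would also track a cumulative privacy budget across queries.

```python
import numpy as np

def dp_count(records, epsilon=1.0):
    """Release a differentially private count of records.

    A count query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon masks any single individual's presence in the data.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

# Illustrative use: publish an aggregate statistic about a scraped
# training corpus without confirming any one person's inclusion.
scraped_records = [f"record_{i}" for i in range(10_432)]
print(dp_count(scraped_records, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; that utility-versus-protection tradeoff is the core knob differential privacy exposes.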
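On the bias point, one simple screen for biased outputs is a statistical parity check that compares favorable-outcome rates across groups. The sketch below uses hypothetical data; it is one metric among many, and a gap flags a result for human review rather than proving discrimination.

```python
def statistical_parity_gap(outcomes, groups, positive=1):
    """Gap between the highest and lowest favorable-outcome rates.

    outcomes: list of decisions (1 = favorable); groups: the group
    label for each decision. A gap near 0 suggests parity on this
    single metric; a large gap warrants closer scrutiny.
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(1 for o in decisions if o == positive) / len(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions (1 = approved) for two applicant groups.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_gap(outcomes, groups))  # 0.5
```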

The need for a comprehensive federal privacy and security law has become more urgent with the emergence of AI technology

Addressing privacy only in the context of AI ignores other important areas. AI tech, like large language models specifically, uses an immense amount of data — including sensitive data — scraped across the internet or provided to it, creating powerful tools for society, like generative AI. While AI will exponentially improve our society, these AI tools make it imperative that the U.S. protect all Americans' data by passing a comprehensive federal privacy and security law rather than taking a piecemeal approach.

Currently, 12 states have passed comprehensive privacy laws, creating a complex patchwork that leaves millions of Americans in other states unprotected and overly burdens industry, particularly small and medium-sized businesses. A comprehensive federal data privacy and security law, like the American Data Privacy and Protection Act proposed in the 117th Congress, is one of the best ways to mitigate data privacy risks before data is collected and used to train AI. The ADPPA made significant progress in 2022 but ultimately stalled.

AI could benefit from a comprehensive federal privacy and security law 

NIST's AI Risk Management Framework notes that AI regulation should leverage outcome-based privacy regulatory frameworks to promote trustworthy and transparent AI technologies. Comprehensive privacy legislation would help address the privacy risks AI presents through general data privacy principles. However, data privacy and security legislation should avoid becoming AI-specific; AI is best left to AI-specific frameworks and actions.

In the ADPPA, AI was implicated in several ways. For example, the ADPPA's Section 102 would require a covered entity to receive "affirmative express consent" from the user before transferring sensitive covered data to a third party, such as an AI chatbot. The bill also incorporated the fundamental privacy principle of data minimization, which limits what data an entity may collect, process or transfer, reducing the amount of data available to be swept into AI training in the first place. Another ADPPA strength was its data retention and disposal schedule, which requires "… the deletion of covered data when such data is required to be deleted by law or is no longer necessary."
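
To illustrate how a retention and disposal schedule of the kind the ADPPA contemplates might operate, here is a minimal sketch. The data categories, retention periods and record shape are hypothetical assumptions; an actual schedule would come from the statute and counsel, not hard-coded values.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per data category (assumptions,
# not figures from the ADPPA).
RETENTION = {
    "marketing_profile": timedelta(days=365),
    "support_ticket": timedelta(days=730),
}

def records_to_delete(records, now=None):
    """Flag records whose retention period has lapsed.

    Each record is a dict with a 'category' and a 'collected_at'
    timestamp. Unknown categories are flagged for review rather
    than silently retained, a minimization-first default.
    """
    now = now or datetime.now(timezone.utc)
    expired = []
    for rec in records:
        limit = RETENTION.get(rec["category"])
        if limit is None or now - rec["collected_at"] > limit:
            expired.append(rec)
    return expired
```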

The provision specifically addressing AI was the ADPPA's Section 207, which required large data holders to conduct an algorithm impact assessment when there is a "consequential risk of harm to an individual or group of individuals" when covered data is collected, processed or transferred. However, "consequential risk" is undefined, which could leave businesses uncertain about whether an algorithm is covered. Similarly, an algorithm design evaluation is required before deployment when an entity develops a covered algorithm to process data "in furtherance of a consequential decision." These terms should be defined to prevent ambiguities that might chill innovation or cause confusion.
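
The definitional gap is easy to see in practice: an entity trying to comply would have to hard-code its own guess at what counts as a "consequential decision." In the sketch below, the domain list is an assumption, not a statutory definition, which is exactly the ambiguity described above.

```python
# Assumed "consequential decision" domains; the ADPPA does not
# define the term, so any such list is the entity's own guess.
CONSEQUENTIAL_DOMAINS = {"housing", "employment", "education", "credit"}

def impact_assessment_required(is_large_data_holder: bool, domain: str) -> bool:
    """Guess whether a Section 207-style assessment is triggered."""
    return is_large_data_holder and domain in CONSEQUENTIAL_DOMAINS

print(impact_assessment_required(True, "employment"))   # True
print(impact_assessment_required(True, "advertising"))  # False, but unclear under the bill
```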

It is essential not to let the latest technological buzz around AI distract from the importance of broadly applicable data privacy and security protections, which would not only help address concerns with AI but protect Americans across current and future advancements. Until federal action is taken, companies should lean on privacy principles to produce responsible and effective AI technology while protecting privacy across all of their products.


